[Binary artifact — not human-readable text. The content is a POSIX (ustar) tar archive of CI job output; the member data is gzip-compressed and cannot be rendered as prose. Recoverable information from the tar headers:]

Archive members (owner core:core):
- var/home/core/zuul-output/
- var/home/core/zuul-output/logs/
- var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log, ~0.85 MB)

[To inspect the contents, extract the archive and decompress the log, e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`.]
K8D>K@t$2XFqV]A:Rw0!S&aJ;Y <zqyYW5g))r9cX3 6 5$OrF FXL9<(iT`^&A.]udZU tKeV8%UD#dG(o"XAVW`TǑ+:vI:Ƶw!p"V.zd^!"L2 Ms1Z+:vA:8[R~׺ Nhg8 mp`P  ;dJ*MJ)dJ*J)dJzP# 5zQ.> 6_`ZPHno߯DWl  wx;h@ %+$҂\m0gCOs:=wpF6{6CG C!fH Y@ c.*5X@U얜d"sݥA53'%+HU$㌮5tD~ff|;3JAFtX Bd3Q8cul-y;vr gNz~%{Rםl--C1.tR~(`n24ի<ܬib:j˭eQ ->bXo'/ƺO-+pOGu/|OM.֌޸Zީ, 2˦NXԦm2RY2=5r9+2z-Wٳlg^iK8!J_!)'Zze.)Ho Ō*黫!v%.+ˬy` AHt61);?fs\猜QX+X0J`D$+h :lu>'g1BBpA LH:h QXZD¬ ݒfօ%T8 yc;I S"!&"7s$#uek&NKϐfx؍Aղ$?jd\W6;HD3OrD8:LpZ8尢NGeK!6SNc [76=]7 vX/rd`Y 2@ ſy%*XvƏfcC*IYtbVFXi K1X̞T]8d^- dБ8#DH4 ҆(JiS X K\Ȋ)Pʍ"VFudXDcZ)"VD h̍50FX ZLW;.׏g _bAP)( 凘 MsKW Z1dP͏,j<\xY {͌1{h+┏ϑq!fPpz*pk5yidʳV>[X1c2b=6MVHKDfx9=ul8o;b3{TPu 7f@Ս_uXFml쾄gWjfW ń: JWP*]-Js4;P~oexgf]!dIͫ‡+o>y~yHOSG{%PϦ7|[ #LXhoe~jK}Y_%A^u|s%b7lr ?d? P~7d*"b@tJ2I*pM?|ggԃ'?'Q!0ǘXOb:# 3q$0əa ?QFCx}wH:1ܞRG1]1iU \1GUe& I`*ϐ! !rMtGA֏\ N{d XPhPD :mve^T~[Y"PQle*ʻ<*!OaZo+0EE3VXR?p2G)1k0ڔ@iMIt:@iC' $ 4}{,%F %op?TRa jܰoo7Xٓӗ@&T5v0K?Z]^` ]z;[rӔ^(m/X˱wZ''gn -?0rW>mL]Em9CcTsUaֲ7Eo.Rovuz |Qm\wG,?=6@Ʒ^ZP6RwRy\(zzY;4(fQS-Ϻhh^]bxa؛E։Z7 SH^9N2,6 l od&kcK:zZ*DvB ō, ǰ>H D,kRGCǀ3〭i"bK(F[ :aFhNPls^EBk܉Z{ƧXcҾ?.F/@9 `V-)x*CO*ZRJP_+h\T}2t%)GsUz]Q>'/ ͐qdED F"A27<~F9y44DERO1BNr0H2EҶ6:٬h (WKM뢄c*5 `/<Ggv$-Xŀz~ڱԒ;Wi4O]1hW3MWݞ|ꘖ̙`|UɚP2P6+C,zc]hXDN01}戦e9@xRDukAmï+BKDJNcRɜRap* M%1"Ay9Ѹc-P 'jG7?]e#0-v(ťTTqAƔL*qD+gP`'GHK>$G;^E8fB!8.l5"(Ȧ"Uc_ 1 E%r>M}PD"pMX+$CO'0E61.fcgg^J)w#0: ,}kvA5/xs7iG͗#<GփA_K3{7(E`ˑLY]a@ LA}0ʈ\Mխbll ׫0U[  N[lB\:6ogs)GG;k$(jb`ŧd<6>uKxLL%iF>5ի*2|T!O*]^mOVBV31 ݃ `ԂX~l)=I&PH8bx#֗'2 H\Q,RnGD!bTnZdx13%5k^e{Uٓ?M/o|Zf'cz9ku0loەKxݴuBtG*ƺF2s$7t5sUbh'=sp=nYݣ }"FZ+H%rIf5ƷtMlϊ:ѿwjN ?g FgqpsI_?~O?_O?קϿ8J]@x xx4?N[ -A3tU7|K1n ,绯?I,YaO͙ ,S!дͯ[MX{⼋vV[?! 
~+Rt π@םPVC44I~u߇5a"pI (B7A쐕N1 蕤׾mXȼe?Oe)N<Z0s1)'+jNZdvD.gH,Ӗ%zNIzVgz5 \%w]ZZ;]]\68K\sFi$$br<@MH`V\^/\zu9椙G-U ^e"4ϊgfd \ ֦*NJ(\# <{tttGG=:Qztԣh N&Gwx~h8Μ,7>+(քR842hjEMp $8x\>DȨExrNژ(!Wf<72I!P TRWze@$\iKF>xuA9d4/aMh}B+Cj ph |p~JHv;˓\a"&œ hE.$+yo\1r:ET%;yN2؀{mNf^J2 SN``/vڀPݜ@3C'B},LA$LoN$Y#>8]5gl:ca DI ČHhJQfĂnQFdm<8 mOGjODZ hkA2|puG4Jʎi̓:-Iz3Hewm=N ):Ի?ö&W5o"),6NNBRIBDc&@9݉A`4]nhnIE$z?Yێohj w:Q=zTm93*(C(ULj(9z9I'pT&&c![3u]U+ !.&A; /a"T Bj %4h "j.ךnMj9ro˹[ͷS F'ohzM^a*կo{Y<9}< F]Ο詭GsY:ҳFȂixtgUs\̒ =) G 46xεTmXm:{xmE)G(㙲p;,M})mo8ټ _iJdY jQ0H4il4*V8g,KʙLE 5t˖7r=aNhTHe2Yb&L,DC(!tTFҠ]g}=ywc8c*I+]6RT"F "dV&PVzU,n@5x sZi=4)JODz]eIVDU - ]g+fH0`8Op`Ժ}ѧ񨹂/pU/&z $cĆ^Ǧ`'f׽_9TڱCuJ^WfK67N&[Btr'xbwZmڡI"6nz9k3?^vk-~={i)Yt qk#8ZS!#rS{Tޒ艱&Ngɂnfu3qPORyHNGx乵*GkgF5g?3v<7_8vrN10^=<5?p,"c x 4$fdg mVnVq7Nq6Da9ȋ{cTINQ#qoy@i;u,yR;Ьgti, x~z~ u=IndRmzh7+PZ/Yv&;:]ٮ anRˆZ.?v>΋=Nžvϯ N-EXyL7wt5߯GbG7l5^7?np˪/n>-ǵ˰!H[ $(marH^S;eCГS2vOE}V^8l㋋"ܜZ6{fмI%I`A>^ޫwKu0{GכYiCx;.m7WrQ;m!D>NaVdY7=ߜ$p4Qٔr;V F{A\ \QQr -{\/Jw kQ׭3*Y,jc=/"uŤ3{xYWGf)^)G<Ã=j;(5'a ¼$+5d\y]ЃG/NIʭ]y~1-2<˰xNe[6fBTxy)̏Ug+x ؚLhS{-\:6h/=|&Qcn23CmO?EOPi.Nz\5߹]6/P1o w/OޖA &m .;}(I 6:D'WV %:9tsL492hت N^VsP^GrIjRp/FŃ89_Gi@ GTėi|E'b%QeH)C3󘛢SQ(e{>pFBZ~ªQVvd׃5oZ=l{uu˫_w/*$`l1 K%[/[+QJɵ_i !%3Cm'A+B02L*@&궲=V=L Vp^KRљk'OzrF!O92(=Bk`DU.J=a9z{'itWoчTn|7Go ~2%]ˆ&WEA"Jk@k$m_ \.K^tgYhF/]״s$ޏn;eADkM"@!$ZNFpiU !tBY03!JMڰܧAk,m\T NraZ3:2̭6=w-dR> ֕ bCGk[ЧXj{[R_G^xw|-]vjiyiRo!f #S'*ijIFg|.EUނ BFw\khfYsA:$VaUW^epJGsK<1B2RV4ObrOǖp/Ҩh0ޑ2> AOʧ&7( (?C+m+WE˝4mCOҷn`p} YrK⾸}GEmڒ&#s_>'']WIN0]~E7N WAl3ga=?3/]!Y|_R8b p)ӣS8g<$d ( ,[oMNzɻDRkޭo-/p }/3( w˗{7L_:/t;w7rۼN OΏW/PJŒ{vz \kL!`|'\'>O0 ~׻ҧӽňv«U`v12ƜaO?Ɩ˱]q]ЫnAIٵ5 y{Mg몁d][dyRp'2e \?O7tZnuueO )K>C)Co{E]#6oϛjPsYav/?t޼_~*??yzquVG`M`ЀUݫVXߢjn.UW|+)zobjYogr w?}q_'l"^,RtjE/au{l6Yg)ݞ0/k߯ bB +1Y |*;qwItվh=$Mna$1Y&9dch yI}g7> OBg+t!s-4#驽 G:B?IO JnmHBPr)JVd8ύlFzTj4^M*:EvRmN܋섟HSdY޽1s'w.w3]t dW/YZ3 q|Z& p\P83jD1G=zee< +'deV$Yk{V?+;cu d4 Nx@Pf O/ͅ)IE>8cm2hxX"RzDhJ 62xrMy]9u-t@0.m q_?]b``>5}r>:"y)ݻ DT;NXccTl^c ќ9#^nr}ȧ?- Im72 
k2Hgb"Zs11SbdL@+CAUuճ|d2؀u[.> 'dSVMQ\ Vk.׭έW,p4 £UF%^]i*hR]Od{߮ME@hܥ_xx4=Qw$aQs" cG\jq{+d}\ @yvO? b迾~©`@S )0|GA) Y P1utRd)s\'XNQT,ktٶIcxfw>d!Y8o.|9slO,dz3AO|*^ů_6?x ^}| ?xV ^~)]\Aesw7wwsw7wwsw7wwsw7wwsw7waUAhژ^ckzY15fF_ԘZockzm7fƬט^ckzY15fƬט^ckzY15fƬט^ckz-~iKS^@ݺnt+4q$L3A%8═Ȥ9`G:57^o]9I"i'Љ)9Xɲ`ji2^ibjIJU'%h@R(s ͐'lc!L#Jq8Zsv5 m~inHUW&'OA_R.  i 3GqUtt'b&@P#&M4I eE?_rQޗ un6w 61Jd9 0(0~ Sr))oM)zyoA1ZϑZ ~Woѿ|_\z!_{<^pQO||cqh"b$t6Ȩhq[XQ())LX`T[(Mh*gmH2%+brXH£hI>^F5vZ9E4;;,㲮)k8͓{9; Nݦ,Z}}~pNN[Em)FԾ[?Zy67QPR)K.tVrZFKLq|e>cp01szR( :fy))*lcL QI֌՚]3nF){хf=u᜸./ZӅ;eJߦ&n\=ir1 sg?{`ez'%uKhx8 s(8ɕQE56IX6ѠMmZٮhWCf}6Y PUY*t`@7Q D&(t$PcF`Mڹ>gj2dϞa*-JRpfҬ@.F$<lLAԐ57MҚ5I{LI{BQJ©i:TN:_I9{1؅*҂da]7XɅh/weDѥѐ|4 QKqU DH cGs٢Zҁ0bT^d-2Q0eŢɨhhFr:J^ _ M?̞yO ;?nP7Hl2+ XԜ%|EYKtoJpLfo*Y ą3' `r9KC!**Kpl[q7pBx4u|{V 3 s],XQ-rpIF^F`4, \aֵcQ?ݵo^m.!G ?j:#PE@ Ó8pa8b$iѡTs]#.*V[x6D 1%) 8*]jZ!Lrt>Ł9xJcFinD-m9%`LĄBsS'G QŲPY#*+7!HÑAcs>:OT%Υ&J˳2s3*S)LF; { |vFR+Lvy4.dD}Ԁٹl!Eedy++=e+-WJ˘1xIJք7GU3exKr1gϢ MHiFێك}oA[K۝A'w q˞iA8Ov*qFH/DFgiN+xeN$*D&"PK<V$,u2-!܁QX9v;SXaGA#Kv9֤eByYQĄ23"hLbyw]Ov%Ջq}[s{zˑO>QGXQ!N\Y4Sv]v9>ݫ?/5e^ϟxs-׋d^vޘ>*f=A5w'>!u쐮ifOܯ+q6U·mpA=A=!bߝqadk2ͤTLFAA hd{v 4 p2YE#H퇁~ӠGnn7TC @288L^޸6H'H@HHrykP#{ms Z݊X3j1ocvu"-fPVCYL`0~fkumHkOHKKi]y)Ƿ^%k$-d)8mb2D$ljb4~JVf=)42I)%GƒfF 2dc@:ocҪ ›[LUQid&€e6jAj!9(e3)R \9P'G7qoY,e-[7>{t |K>]wA  UAZogfxފZ)ʪULR9zF#_gȖ x+(ys m͞-MNR"|l V K( "Pg}&juN("2Nup,}N6sĿ6ys :0X6J8GCHuOn]4K˙6 J5Ƅ kZismX91"jhrEC#h$l8zB2=O>ϱ`ڒz5o11cBG>lrc xi,SQEDPGAKJempaůʩ3k!VC eN3*%꒟<ڱ{nΝx&qC;얢E;Ბ6˦'ER?6\0Z_&79l)tfkeG+$P݋47J|x{EK tW< w7']y9!Ws^笥6/-p8)Ssţr_~r<¤{9ܚm'ۜkyu|y"ˍMO?jlI-LH1&H43IZ,Rt"NE 2?{ƑBq]N$ ,jI)b$%JVڲ8@d-tչ=!^yf_ <=,O?q*iŗ<-p ?dyCk$n;Vyyxr=fY~pxiw-=kݗۿĀe77/2nl=xy;gtXpq#wꡅ#n_nfmlq9ok"Jg:Wo98(rPxOW/_.L#.}5E6UR͇vnDvczsЗ76{>8˽,we 3\=鶅wϛvFt6ӿa`VOl¦|^e'gufn|b6׶[FN{gpLͮM6;{y<️niNFLNX}=T}u`/ͅb /iTԊ8<]Np2+˧*{x|xQ"mbb{WEs AG,Ƥ{5I9XvMr7g'Fu*q⺻޺cN[ԎQ{r?lzEo)M_nϳY o,ξߞuٰ4bڇw#ta5mcmf:E0?k^,K7tv oC 臼{^[\aOϨ1e}>|eUVk-it2-tY[aMqq)Rv^pepq:|nQ-Sz4 娝]2cʡ!}Y0WYXKib .ˬJ/s8|xsJ|$€GeB`+C/'Ǯ7zχ-.u;Ȏ8O뗆~7t)lyx 
}w~%~lyZ|컟~Wϖko5ñWI';7>/ΖAC-g^ݧ+Sdu{ 4g:Ph}7B_⌑N6yIgJ@Mgtӣ] qq|-|uX@8d/OLc~xPxs߮^+̽돽5BzoB^Zo+O{Wy JeԄ:7K\}ixkם>ľv͏Z,ǥ5U-,D͖@շ`79B4Z絥dQM%Hjg$TZʘm2ʸRQhAZ$5'~lQ'i~*RR(OvJkٵbh:r^|L}֌%)m]3BJᨨf*LF'()SS%*D 1 px x~ 3)e%lFZ+ $eʢrQXW=„[kQ Ki) omx_UdqzX EW JGAE\$ݔ6l&7S੒b5eXF*xd68hj`h6HaJcxJ! V 6=`, ל !$ e6 a#1S;=51 ~)}ŬQ@q|cHQBu"wiprB@}o2( ߹a烉oCouj$\zW sc ZoQ,j784_4hZB{PK()DM+ʫ2 䓢Z0҅qca}y 0|IEHV/V>(2XhGWMƢs*Yȩ ՏDE3ronLj&8x*@&o]{ͭv;:o1K*!+tm e,{ԥt13~R;ȋI4 QB/B @9Bik ) 4eP7 PJNZ%Lň Q u0]BxdWEfRh" V N+]Kd0-@O"}Q:$ }]qcQm\g$1S"IE_(M6ɀCV=zw7Y zZXtO*üO(>&~H4flh'r(Zsl<~'X}tґp֮i&$R:͠A*RI/QR@t52ߤn$,B L辠[ieT`LMiXyynzyu8wχ˴q=Ϲw&hP`aiFh%FBƔ(°i >; ,fgm*5fկFj99%E.Ę4rp1aҷTfU(5nJ4𐗨5ɡM!TsC<ZM~nD;=iPX ]gT*dEheBCJ@R9$TiOW,XoڪכaӳY@|?^ouEXIBi'W Wh#GooP']. U@ SBTo2z#5J}ToFB,ÌW=g U4+~(]IB =4v*hc+~ҠǚA*Ei J>{%tu2 $jd-'k#kDv ʷҔ/o{7SAj+v'Qqp$kN)#t`eSFuL!jIR8IBq}}蹷h&@ٿ _}3[.Nׇ&g;f.1z&|~~䴿P=gGÉgx\!N}o 녋agC߽Xڮ`Wvo W$ɄѾG,ʼnqq X[M 7cCtD|r8ua 6F3M1#|c,nL\4#s~f|#1K9R.rͥt 2ab( r$ MӁ< z]m^ uU90u력>B nZͯHxC֬/u" QBR-q 8mi{jT'zI"SՑ`gO3J6qJi-r,Ǹ3! Xd0P S.\t0Fp[4F*Qo`)Qi]UE 6vI6Ƶw!p"V.zd^!"LKz6)u^.Ɔ RmҴOǗi `{+tKYƴx2-66IKϜf!9>kDil3/"g*+Uٲ[w#| WĪ{=$PU,ΪORUYDG{1ˏDwa)8]4g-F! 0 } xeTMuV4IyAӳXqh OOz}rx5д Q̒ (7=M<\')lsS@ cܫ)'ҐN6JU>> GK&n%}0L)Zh"_DO&<ѺέQ0׋᷇Aɕ԰h5%s]\6 FS^_ٚ/m¼vU!C$8ZCě<txRp#kp l 󑱬1=c6qvoD)g4@_U /k}|LԶ$-]Y.l'\o}W z`caX *P Tu*w8cP::[9 ތ۬*tC'ZK(In oL'Ii@<+'UfM~!M;8ƭn&?6LV "d@jik|@NxGc} ru08a'CrǡdG᬴ lMGbKFbr[XA`Ir\$539f^T}ki]j ;Q7b-]WTwƽNeFl[_w3^B'w Xhs+׽ ~q]F3TYda v+Ro=*J({یtP33t:.f>"8&yiA^2JYeSCzlyqˆ^FcD2Y+냉4eDDL ` Xy$R@32egGW< _evdMyyZ.=󽮇[:_*Sy/ ;='L.UTptP+`/)=6. 
̛ `(HEΛMcF1Rʌ ;6`#*2l5`Tѡ3nkq% g86幘{gP)9h#3Eų3E?.?HJkӃN*zGL8B *yoSG:U.T /|C \o09Bie8.N\HFNLGb'jW!P7cvkYyf^YS֙@p=nw:h.FFwŃI@hsթ;5zb.&|t.tTJo?Rh׵{JPrܽMy)vyf}H Mn^7yioݒy֕o/H8B7$Ӣ&,__yQoXFnml~p|;}mg%o/wKRw2N{8Sha.0rxZn3,떌ͩjڢeگO]ޮK4_Mk7i7XHǞtzX{))S"!`%K9 Q Ƒv@V nI_q80Q8I5-L Jlnm(f/ШcC{4Ǩ˱2[l |Y`Wg6(^(6m偩Awr ;i(&^Cjnܵ..d6K6]z^ܨ9$/2zx1%NG XcwщeJ1#PK[2˥3sNҩlʐ X6\5tN8ʠߝ1WRF4<̷yt*֝PiaNB@'-:}=7Cc)rvBQvtƛi$>ˎr RJN=f%^)0l-L.TFe֭D'sTw5.fxAL : .›V#望gnJ{CPjo7oׇեJ(ѩ*PT&K`w;U.uxf<G/|NO&t湅=;op&p~yةO@(Π#B׍e A\(8d`X1>bCL9vR0|%gS4aM\O}(ɥgWw6}<$E^ֳiYc>3cOSŒ3ZFg mx|HvyWˀL J-Ѣ"MaL8QkiUբemEYծ]+ _Zl[u&8dHk\Z<f[!rFo U`ha)1H)h[u ZHrP? ϺjQRe=Abř:uB7V{0"h-U" U|p~.ϜJNŵܳy-oN fqmu8(u:*([^ A\S),ʈxo޾s8֔NAE^ ] o69S|#gb_^r|<)sJrT0PK +7L.%/:0%Ht v:IOhO3VCPʣ!rRqzCrSvU.zܧ0V8z ɼ5% h),{krr;+7=LNTkߠHGUQ Fb֊ ^Zn#c4::GPMˇ؆z7_Ue]D;z|x5/F꺿VŔ"2y*?jU'HX{7O8O\7hTnC^wf%F|Y/:S}OӋg0Z+u0JŬoەEh<Ӵv:e[AHSM4$a`mfuG-q4 {0bǣ'wu cVF6׺hdSMm.^:q#boF`S/Ux?B f{~p}/gCo?}Q;>?}3_Y14"<<dz_vڈRUc}9].yoWV >VEղژ\+z|(|?xP Y-A&\GGPlT١YT{ w_B}1%4z*! 5NS?j_lsLva$Q1IF9KTK4 $)KB[ϣH'aګ ̫jEKDhy t 1.Y]C>IъL9ͬI SMFEnSȦNb3{fJfZy}vZhxّi 㿄v1/v>c16dAK+QU:V? ӝxƩ F_TjEr*eOep_F"CB@augHPG*p_y-U 눥HzIܸ`*ȝώ)>(23a]_g?OszuzAw,r9W8ll 9Yg /N;w"DGI`HtǻpvF`u'?N:%^8kJCpZdxP.[y$&4Q N{DRJ2ϻ)4e(f\r3N56N{1qh<o*uCq}ۖM^9SlZέذၠmNcrZCEEίm))[bh/lcP:əd&p$NIj%YN@*8'Q6"BHΑ<B!Db됌Cb1(R*MR8;md)b#cO,\-&pG,\J6WE],On'~:4w/Ggefzmu,z!rmbP;-QzD)ri,3F4l:-I`7dFs \i)43h0 Q iJ*%vvQX1.ۂFǾM 6PcF;#@h*mb8r(p-!2+F#&(I*mI*BFfHZўGX"a^!qD7 ٓG:ꇎblڨ>7/c4h """+C7g.e (!JQ ND-X8 j{j*|+e8~aV6u%([ YQo]R,C4LU[חbcI`٦{巃Ww >)5;fY`&Yטcik}`,{ŘfB)!_ jEU. 
%{+Ǩ]׾oE6fo:ܒEM+S1DRhFƔ7GA:ɱ e:Hj+O 8Mh.JŬ`Q،KjY_TKiE?#]7;Al]wCe?M/f׶ݕdޢ8$&UZ K!P# P9Mޢb;NLtY(PdI.R3HR^qJ<)MXLgI t3OSS嵣կ7d6lAٮHEtq2&`CC*u(`=RT&:Q #ƥpENeRwGhغ8{+<>4%3i4B[ !E2hMqi)ˮD+si=76t:ic*O5sN7JXiRl瞈=nۏp5PRs_~^]ϡ%疈z8gVbbp(y %% U$WXRBo%Hhfe9mNts"aKCϓfµJq#&&P \1#5*FnxP IH buϧ-Eg݋.y'F&cյDhsѹKSxFEwWxw|w5{s؝ɖBlV7 \^}T {ZPʝC_h.oqu]w;ܾ·<_ې|+}94_ܺk={^>j&}~rl\)Y>hEָM?KlI- fCr`Bf^SRiFГcifu/VynN=z3f3L/BoM򏅂U:pP_'syAka`~ 6Aq9Us<{r;dFYNo:iP-W>c"FrNAf Zs6LIPKՎJ,YAg%&O7gZztzwk4SN1,NC`╕bcz7X?15 ]gNJ )4m?{֜FJ^v|vJE~31{bl}qUȗ~4$K@SeVVzs"[mqyMm>c>Znܜ ֋b{a!UX, 8:2My+3)2 1{}O[ H'2+C&~p$k% j fr<GQ)JL=vqt&,;B::9uW͹:ˊ~@yR(jt>"!d=G᠛ZU/w%) dkMr{(F+pѕ,^( ]7|b iMmXS;c:OZ#H2u AilM6xT.~Vr{˻P!(n\9eS_ާ^Ko ʴgQr0gO >kF4|n MjU:-F*.ey(U\ZAWqT*.ϯPTs Hc5[0\c!nDƆ Ц)h= o MuV`:G^pm9Fl C- ![CZ&l] `+&mԈGːc"I0U[4"&`k&iZj#k?몕BȤc"R ɘv<&&iTXgD [cD+'˿!T9T\{vc-?UKKniƙHԞhM3%% Ô+cꎋ['RMdIt# {[(y73aڷo&l/Qu= FG4 1`_.,uZ<^RKv.:m1%Pv^3,g+'M-p!kkſ*e1WJFqK,()cy%&La5K6x}rN-[L51 Кqi㹳jǘ2D*"T I[0TNZhw89SֲXZV Y]ԟ?e%r۽p &4+l^[I%,6'<k,DZ jkV؉yLDžBZʤԁpf`1U>Di28SڂB)&EZqLKCZ|e Ҥ\n /΋3u/堒n~|.9gg, (ktԃ`2`c?(u~N>^/9xqSU_:09i륨]w1?sd}rh0c .cz:J2F?Qro1(5lSK-J8b=وtBy {BroȎXF{8O06Pآ^O[1)(򌼜+jp6RbA&:'>.'X)aK8x(W[c.dz^_O~*?z|Yj%FaBŔ+Ke[fnx;$*z/2եMۍe=I'.t֥ [X!~f.]~0ztg3^18JJ^'\벱jƙ:8Y#cA1TduuFNIѯ~nhTL \8o?y7߷?|7o?/>qΊt nL¯7 N?y׆57iޤVG]3~myIw$roivt.c_ ;#M2''?fWV׽v Uuà3-jsT{#**D f4#CyW@`OM>{Hg틕&ޞV? #I"hҚ(Ɛj.+& %gd'昡G#鱽 ~L5žh <F0c|% j Xɀk5)i$Xaxxʡ^fjM&΅n37Ӵc{煆'Kcn@'I 3] 9DŇ{#՚!nC:pqs JVlGr+cڿ,1*^'0 Vx@@cb4֊q/9o1x[G 1$ '2$9l=q@>Q\anѼ#OzbV?{ XžśzCR)K9vV΍h0#ޱhD8qҚzD%8%6#9mĥyoWf+I\z7mwEKPRH!iLRK-`ĆHtQ!*j d PU="з8Dnc--c, [#IQR/5KP g3xHȕ`_ȧ!JM]xUڑ v2g%? \%9P˰ J*ȟar%ԕh+ q( eT*9;LxJk8 uF]er]1=uexUճTWF`~ =D廿WW^yKʎ| ҉?/gBL){'[=BaÃC%r?2r8jտ߽u~!;/*WƍÏ:%{>wټ?_nxo^. 
F$ up!&*̕y_XoLS7ы#` ZN@2L*~% ȿ̼&(DNj'i~^N|Qb'/' %WxUqLrB98PDEVыMnI;P@_zΌ69ݦH'yq)pʩ3>8.^m[yϠ3o7BO8_یfb@c9fr@L{GP);BgŌ\(gw{ +/\wjgr uӰU|ҕ/u3UAΒ-~+Q*īQ#%z!j≯=& gM*%}LpHS4ب`F?d4KUiƳd;q2Jp䵦VDX V%},u֤5'lI]Lg sW^[9FT)ID\ KP$ɋlB:kF*0ք_ˢwe o'}wi@L\M:)]OU虐N5kZZ J@8Znyqغ&d4\{{ݦܤ$X3̌\(̵ْ6/T|#q"M{Ҋ+Z[!4w Eyp[Lj |m,N>as+v0j M ÃW x@ 80;sjn6 jQFu]\d72.f7{Ű? .LNsu̩6Z%fFZ\Hqsv몘Q):`ǏUP]Nw q)w>U!z,g.fsn" Ug/F:yLM}^OT'%a/0L8۝N6:n$tY_E]cަ>\_| % V`,9 Wa|(\%dh=CK8ILJrL&23TWL1yO^U-yro~|q^ \=43xY NZY+& ( |WlR/s :^rmHK 8).LYPmC@֦}{9ӈw5-)K7P;xɚKY_ϖ*4i%dK"_O̊"6vyU`sEim=QיڋkѲ<.ly he1J5gQr26g"RaV3O{LLˎb@ M)_&ưm uH}%)/+sёhDs4AYRX10"'CU0'WNk֫` mtԱS1j"䌧$fdU͜˯JUZ݀ 4s_qP JNnϕ1KUE]}6R>^s`NEw7JPrm<+Ómp)RWXG+Tk1i$:ͬe&N@qjn &Ope[m]/>Z@i){J2w3-7C`xi9g*dٖmm`E֍mjd^rOnlL4}1ĝ/])"ZE^Ag4RP3YXeU!|^S!ƻn*)͢7dWsr gҧ2JwC|3 *&ʫ/K~[!|n{ۛmP@Fh.29o5.Լ7ӕholHNXJhΉȥ:]EY \U5i^"<[} \+ֈ2h#C%BexT#@XPkgz#cڽ vgbqXocaڽ;CZ\oMNz} Ѫs6KHb(@DfBT @$'\LC90 6m & hbbcYh*!vFnُfR׌SAn\֝Q[=2]o!'Fe=C"Ɣ(xL-L0!;PV,daڊ]`M%Uo/^`"F5gZ^݆F} OԀT|<MgD4#"S ;p\Abl(bkb}ay@ZFZn}X> gM_>z"} _C_ASe (Y8?;4=kƎ:e9{YJ1m+6FSHy덉YՇyb1dl4:Uh뤫jP5:Ġc,/5Iޠ5HǪ| 6:݆ۨs`]2륶it/]v(+ ~]y#6Ѓ:Eڴ3u2ATIO]h%s )4V+ bg'(}&8njS2Db FLIgD&!zݻ goYLԪ/Vt2mݮ-˷,I3J'TPڟM:6\82&m.Q~NV#j&/s@ɹ^rrk+MC;Zvf 6Ɗ l1\q( “\+-3RΓ.*c ;*oX6嘣\my^|jmɱCZa;nj}Lw<&feKIg& = BIb+RCRV*&b0"O乹}# =#;CIY^k"HdXIKZGhJXGy&4GG 8.9ΟoxL՛[RKDF{X-T\T$Z39*6ݴkϕ!آġ}z)m]m0N|7^6 0LosoAۼ 0'TWe[kK7IZ舚0{Mғr_S"pL͝qXݬw÷l YvLT󑇰\ ףvh`߻4Lt%;p-yHs[&hjx9|ߙ)&7]I\hX~Ἃ'N_2FG z_>_.gnնN  򊒫#z Z+CGͫ"A/F&Gb1W)(1%LaN(<#tgIĻJ/s 6X*݃8}m©ozTHE2RN5Ţ#Qr%$m e}wރniZlƯ7yȣ? 
k<`̛ voȾI`nr@gne#񪆒&5$vCVDkh)j<īyxJ &l*S Mcb/eC \Il u6 $ع8V{uŨN1& LA݆# BKthY_y}R,?Hа;TzA5'#|鵊Ϩ(IpgS 7s)J"MʞzQ6JƢ$`Qj_w^rYpLW_RP QJ5wOnMCfJ m UFE1\_H'OZÁc#ȗND5 Z0礭%mX՚kઑ3&ppbyft#ږiwg!e:ܓeUfeeOf_.{^‡w}s;,qJfy4ȝC[hknm2 WrӴâuD%:;zոGvUk-wC+r /s_/=]7?>\Z.m>}hj#:1ѝ0/wf7r%+j(V_wѷli` 5 ӧmyQٱѝюczSGx>;03Oѭ-g#M BES5 Jpb`CPF`;QŠ|.q!h11Jp1Y11b|%Re=א4פb6] r~2^{|}n ]y-yu03dWWJMhu'NAYNKX>z ܢFRd9j GQ_ki:DBJhT#Q%rB ̮R!9)LdcPUkGEHɧc((J>W|ͦp5=9dS1XA~ʛiϨgX?>^'.g޽R \7Go-\l_~חMS˭Q7 >^T/f?l *-䎸%Vh; .⏾߮ uȐli~Q 昒dp>| @C)}??Lg_?uVGH5(#gʚH_yhTKFl2~Cvu?εHD'gUR;.^*D:>=$g?GX{ 8F1Pg{}>ݙ(]k:h=UFSuj]_5TG#H&,p:4 u#LN9.zبk8'4+';w}O<wwǓɻ:O}z 8?QfpTǂf/fᏗ1_ҿ~؜H#iaҍz:5rNRf{~W ·o'e79vV:Nw&jĽ .ӯ ۸̯&>nٓ&*~] B%B4;1 TB>u6w$)ɽk$-6$*B*ΒT T L қ4Bl@Yc͒ZXㆵJҮ 3WT%ӂ#1>dQr1Zd28ϵQ̚pБ"B=iP w46'U dy{ܣX(z,-u:{.U~3[&8໽qvh"FP‚7`Z,q$o'=q`xdtG9 $`mϚ Ej m?KD 5Gs L6&OM1He)t@0A.nS}'c =7 HCdەP,&V$瘷Pch,qz̉Bk ,%)yiwlcm6Ƙm$ǐ4axVƒcjC:K@&YfETU@7ic\k\ "8O` yj 6Pp*@lUDž,n? 9w8 `ÝFo޴;HXv. "\l Ю:3bDZ0VEm 'AQqZVow& 6QbBT4MDqNvRQ`JSQf{fglKyyaPfivlg[tɱÞ܄^=[tt(Ajmـ֐ZC^[6-Ж heZ F^x5e2^Kـ̭4^6 sP- P̰ ږ hl@[6-Ж heڲmـl@[6-Ж hM6% ¶7R..~V\_M[ :fC(@yU8mB@1VSA5usO.0[?w _a1W6[$5K09x:a*d{>rgK贐zbL udFJS ĩv!ă2L,h5T"=֦ef^lh -Cg}q=|6C2Q<3l}TcM>0=_,ݙef}壯%(uSz3#NbasV6P'犤1 xǣ" .-V"lE"<ƭ`h1I&P\`)J˽%`&J Дq`AZbsjfcsĥTL$1&(*b3),urT@ dZߴK_cl{Y";zS^~_FfņխUxܞ^Q6WN T?"g,Y&CC1$PԴfs38sٜɆRBF9O'%"(I§R,:iK-Nl'\s͆{ P ra<Dk"I #RqqKipeq3B/@.*LxVa//L|{møe+ZRr$6kO\R[z!<=$x(~)q#;W$Y(ךY +8hO&>fQzrR-D4] Zl;Ts:uPVyt;wYC%]YXv\ FGo Wr׽ɻ\5`+vi=okZ\1j}]gNgnXo|B-@ɋz$6z/ڭT{T(> ˋ8.|ne)8kdalHEָm G r"ͿzaUjQVEY] @2еٶHk?HKSl i{)ݕW/VL8B 3[HJE#B ں+h%*IDsQQ-=&iLXg'鄳[ў͜LLH fJsἔ sW>&|l 8SͽIxx+() N7RΉMko3wHio$-5b7+mKQFWTIST_ۄ {-u2\zݦBnӏSIs*h3) ~%p R<iY2'cqЙ3'#G/90Z]tJW<-SVY&5,IͼR2x,0ƃ`prEhlV䜈Fr"7p2FP.*8Nj"0224C47Ά)]`|}r2'=_ZC5̳ۛYMbM9|gL557yX¤^eJ&q9< JEs%\(dH5 |CH:^(|fHx (olʛđ)%x4SUR Z! 
WD"qA0XM({هan2#bZJG9.h%WRefK1!;2Kl^lִBR*DA /Bd:CaЊ8bjR 9R b"I}0ep̧EXSFaM-B9f!dBlR)k㿎;q}98X.v =uk-( F+(3ǝyg;w|LϪ~(]k:h=UFSuj]_5TG82H&,p:4 u#L+.zبk8'4+';w}O<wwǓɻ:O}z 8?QfpTǂf/fᏗ1_ҿ~؜H#iaҍz:5rNRf{~W ·o'e7vW:w&jĽ .ӯ ۸̯&Bnٓ&*~] B%B4;1 yWB>u6w)!Ƚ%kD5҃uyJAD5U!g &%P*;nX$0yE|\"xA8-1#K.)E&\ŬII )"*/CFu*sScsYMfnx.rR \պ[R`bkd ۻggQa8.Ρ_&*%,x# 5/W8XAv "{A; "[$ ڞ5D- z*€Uٟ%"ЄͣZq9y Ɠfk"^ *@6+@e$A6#:Iks ".WPcqp,@.47h>d'X՛` }}d&4}*XLnn-b  Fd `NZ3_Pf)IɣH[ g/}mr7`_4Z&/Q Qc1@mTg5{ }TP$L*4R#m#u` `A牖4$O C٤zt%)$ƥ Y2 .^8r)>iȺJD;/ξyzj!rGu$NzĴxuOxUd Ю:3ٻrW RUk^'YyY+7A_ec`h t7hfBdըX Ru XrU[(LCT'4Zjrl=Ky42Wl5U6>niM}-Ƈ㈎ք1(x5V)2Dū)-1]B w<9-g.( n5Js*OD.򟳎R$bE@t P@ҵ*. E2~Mb5Pu3gswS'zyqvhó>)%}Lh T8Q@=Z31D,zTz9r8!S;$ q*1C tD/& [CbSc_HeI)k1Ƀk|%jQ9@-M.`:4C%$x>fBIB5`k𙋊oXlkgg1$Nqœt yU]4 =;>ݲh붝͠@NP>Y9+o/^쵓7% ~+hzx”$ L\M^%$a f4HXeW`!AWLPcU;2$ HހϤ{?s_] 3ӽEqynޞB=)(4s?ŐfmЇ"V C1* N闯g)g\5>Ig=lb}H[-(IL`[G&&pMh ܄<1^u)G~OһZc0v'C%TrL)hU)^GN1` &Y>9_uv?-U[[O~Ir۷^]i{ cǬ|DVSs),)Zy,a4d(Vrb4KUv,r r5UΕ4ɔ$Dʽ;zEǫy$jJq~yPٺgb^Nv;3 TW >Ț.)m%(B+*o.I 2hN+(stQ#8'gJQWm1ԷzǥpJY [+WgjbKPd͜- |F){n+Xx(>Ij`XxkQk !Y^v=~xvvS02bم Wt!Z@/U54vVF"#_ *InCb)BMN~cT I *6|oμ^lFtq^$VsQ[wFm=P{DV8h2jJ,!k@HPǪ}Q6K>2:Gٲ*<]^]ҝ/'鷥?_b73t.%ff~4zbW׼-)|0jzÿ*puq*]Q3zA_U/I_7x`Z EG\cfCf!L\U>Y98[I&Pg<\~=c//EW_>|sg]'2,mm6mv|{D p4*TTw|PE4 hઙKXYkUzp,vGdg_/..??/.gu+.(X\wIE-w=^Gy`qFf*F,T(S\ɣɍȪ +3oqͰ^}Co ݾ7n2t}Co ݾ7thh}+p ݾ7tn}o2kz)GT h~\GSݬpE?JF;,! 
Ƹ7in+UOBG;۳9;ұB|Co r!7B|Co !!7F;B|B|CoB|Co* c(Q9dLߐ2}Co !7dLߐ3_ix.4Nn?yZR4c[5e.ϸC~3:41TU34e!/b-2?Й/K 9r bDΊRA#r X L% &iW*D  ]7@7PAUG_]C ՈSB |U)#zIuY ;-axQG+I&Zc}*ZWPҡ{01 gYjG6B*Ą<sƑA&Bd%M҆jI P0^Q{n5cyci/<g4kJ?sʖbfRफcUFTI&9hb5+Hy82z{ {OW3'^pwȆrͽ(^zH|;=7䝂#£n" GwstA~Z9dhngKaEcNFW5h" O% U'?Nwp(:QAtTj`$F J.zUf"|NB1Q#Jj!AKȍh@41c(6&9v!0iĖdPY'!%'Y{ٜ BO$x{E=r_;Q9+G)RY֋cd{ʈ;2ȥGx)-LUYHE5y`rK#Σ@KWI3c3'$@2W:: Z`kŨKo7ٟa3솩OG^ȵ.zݬ.c Ofg'sV&di1zr;g&>[9;M|B9]w{͓/÷*?y5C=\Nae3݉߹XlZ73t3"^qZ|c^3?9F~y6rzߟoF82<+SR&rMɇ2%r;!=#]wA߃;^ńz`&#(rr{69`bMتK4(ؖcaIKd=^^ߌ[_~>ɓ4nA}xt豐~),b"U`dTFP!RQ9a5hjsIRvm-/ {kh W1gq / eYB3dѥkƝA?ɞ CP4dkVSb-<#H\A&Kf qx}7>~-VGCzRP;;M )NRoƲx + :#s0C(*ȷIr8mju HF[+:nÇ#vW|-˿l6_scf?؄n!ц;WJoZfrMZA[UoQ~;)%/kfenYeO^;};H3G5Ɨw'<ύMag1*P%29cL\ cDdgU\rSkƘxĉ]فk;촹;.NO{f'X|+ォ]nwLP Tqx5 h]$qk+0iV^V?&*kٻ6cW'#uwZ>x7ٗxΉ!&LHzxuD#ihlH$g8]=S]]jD;Yg-0Q; {/떡T*zWRTB}kt%j9}>Zv}G[;/?]neO76s% h52$ D1#8NT3w3}nsK?Kߏpg6hOPAʊ`Nd-G,EH),&8antZCSJN&$Kn吹SŘ^doltAW^CښYHDFjG6bAq٤XV b qo-B^E)S ۅKFc,-&$ 1i+TыcBGtM.nQ1Jmћs fV=*-%>oM7zPhLq,8Dp>Iq6S^E׻|m bp^ `!Y\rEyđ G]f}2C8ri:󣫣Qfql5KNW`=w]bV{e=^rѡn0Ai<+s?&.8pqkH{Ơ<kxFhT)Ґ(lu q}D O"Z*s|ܘ#?]>/>x%0.Ԋ3m8Σ{ ޮ9Yn(mIۍ#IuHgLlfY>*i7zz=l8{`roUG]NrӨ R%M5RV>\ri Iqȕ`2s5E5o Ã0b)o\maX8ۻqq8 %jK|z$xX"ǑWlZgpJ[-UE b+BP|,JlVxW&%|l TbWOI?a7:["-oM[ޣWtI4f4iP|Vr}g.GcEhF B#w^4,o&im\/j{fwNG?D+w!`&@ϒH@-Br19>86mv!oK5)=8EAa;(t_ s}g0`ܿ VT >uKyqu> 5;NZ1ꦄ)jEhpfUy& 9h3hЫb/}DT!ŌLX4jTb)о:HbXrTL!-I16;a#''#W̽Q: N[%XrǪ`glӠ*U"r ~88K9~C͎^Y| \vQ*7jpeUwFoCp64J}X0N%)scb#%SO.n]yV]g3{C.sQLSZĵ9XųГE)AcRJw. ];0#ZfgHAV51ȐQG<71 wuFm_JtN2]p-Q_϶z}F싻׳J.G7Ǫv 81`oQqv(}c 7P)MƠ+"X1~0⪐͡B}WJ-+۶; qEk8qUUx(Pk⊨4BWq݈+'˷14}eq4r+O+Ooӣ?_Ɠ l帧5K dViZ̝I7Gf7I7<䢇ioG]2NKsVy$⻸m-v꽒g$gC RȕPC !J1 "У}4.ΙÍbǚ6_"4,QZk,QL"hmE֩ZTf]&T2pt2pt/DXY$D)!fj%BOCXbJ IɥU$=rR"K,Fk2'g q3r <(ԡ@`3(lj& cV!㜍*rxXPҩZ0m dƊUkZن2B*&'IzZ$FDD%]BM.sEc(ϯi3>٩MdNk==:]\Q7䒬ɐ-hIqL93iOdn*-;#qy%bX܌}ǮQt,EU">!+P96kM{:T,X[iN%EΘ6l$B/"9(NHYBpPq9Kį=jOrq,W|qɮrQv,*3ldڒ&` 3; IpBeI^c"u(p^Vr/3.Mt4^H)큢m4ȷS_o{ԋ" ˘-'cU%"z5ڴXieŽDNX5YB Qk 0teRh-H3V 'Ercd:h՝ew9|Q#ߣDܝv;. 
C7[ظ9=凌RĿ^2.k҂Gگvg6׶M3~5omz. zͬ"hA7rDѳa9K<$C`,h0 i,8C#W!0ǘXOb:&#gjhI L6Kp3""k/%%qJ7bi1g;Ja810tNͭTw.+/ɻ+$}h=Y\Y`C gx_̳7WL/ʹ]nBH\2NJd<yD$Q -9+R 3TCm ໛zӫ G߆ d+m|mYu\<K :n rP+TlӖiOLD$R]^iGQ+R[0EEM0&3zHL?,B뜬l^5E|7r٢†uiQ֛ |`W{;ӻɰܠ{0IOԋ{=hQ^p UǓ=WOqz1\C.~ZO~vY($T#"ᷠϾK|ÕW*v/皞ZFp.  zĊ3"u렅&Jnq-"aVE2'wH8IqC&&[׻ب&7 nL[6}|^gN0kA9Q!G\RJ4f1Gfxkg-.оiR CM^/P]k1k,vm5f 1YaqM~X^V/pf4N*.*Bc,:XV0@h MZaDǐUV'^ZrE裟ݤN dБ8#DH4 ҆(JiS \ K\#R1G EέRȰ, Ƹ#aREb?D h#v~]!a)cGoY%f^_ݽ?DS\aXϨc)(B3K ku6;~Laչ%m>%yq>gWI]:\ WJ3+PX󁫤WI[O`Įfz?p%5 f|*ȹUVS!\)1YIণjg ɸPƣݠJoIgĽ8!ѥ$)G!QtMb&Z3U)lp5{>40iԩ*h3 |Ă\L곁W-*iIx7WW0B&p4u<.\=M[)WOre$J=TzW FR.c\T1Nާ*drABX}G_o|7MgM=^;eWLPsIo? bD3iPR}60\`:iNAK2LGHa@aF1=mҧWIPp~!VX`t^;`iXJዔuqyx˦:>1+LEDLNVi_X&X3.9ÈbqF 3v> cS?I[!Щc3h))Soi}?^TYWvRxEd}xɰ`uɏ%]@#(a20ϦS)OMs@U~◙Y-*Cw=!PvtZw=5Z\+r}\+r}\+r}*X< D\+r}\+r}\+S(@Rw~{7˗my}Ri,E@ T(ނs5wh.2<'N/K}BRicѴYD A5F+SA XK AH:o;cƌL"^ˈiDk45[!-O/+4S"[Cu:vfedUְdvE8vyC9 ER'-d&\O{D׳p{ѣ إJTa+&ݢTUԀ?.&TкŬnНrEʠowf%ZвGk{e WZhŘ Af= ‹ǶO}Cs󦻅/}헩+ۼ퐔??WeHRjCWyƗzm!N=u}A/ٲ? +O>N?rwU6N_PX" a;ʜXg&Hb#qZI!QRlY+냉KM-a$EV 1qSLtN˕5OFx&SZ[8KJn>ڿs 3E|V}k8-]^$sD1nQg!h3Y 4ê_u[xظPp3I̠R=8­uIo܈m7"H}*7j.D+oAn]ury\Bn\& á,5m7PBߐNHWE Va&LZzIxt)ĕA%9C ]ٻ[NiUfzRA|sW/iy`́Uո=;N8xҿv'nx`ꔔyECJ\im|4:Լ :'29f.c۵7)х3ZJls{Yf=N'ԠgN)jkiP%j-qR`16eN% !o/>?/cf@y RJ N=F< `^)b-L.TFZl7FEw_gUXzv5/Q|?w{c^xXu RJ~NӇ>̑@t 7C+gcnB`1X1b8vR0|%gS4agzˍ_%v;L2 d^3F-,y$y`.-%KiK/:aXX9dl&i}sQzrMLpZj>ذ(Б&r8L9d,Z%KA)$ $>@؈&7M&вD W"`0HAO/W8M}e5B?azAn}᪾Ϟ/їUd5=-@kx1nμ w>~2 ݖlP+Pl,h]J;Hu H%'t0( +\1u26%)Uo`P-4yIK6_'QeH)C3)7%@s4s~nZEZ~t)>vGW1j"={A"K9vfEH-Ox<6/02O[܏atN ^$!dJ )V Ƭ}bAcj2 3!J B[="]4pJpQ S,&$xN%U98;2-hrupKZZ3J@}Cn'ovri~oi]r[tK&[0O֢ĒM6"8sy- 'ͶxP+N/4qp;!͔, wY$,VaUW^epJGsK3=M4R`ٞv=u> E{C &.ꏄiyt,c( 荴Dt=0opD%Eo|!~itw[#ҾWȟZK0B ,.jrqV!=1zKi,N8Fph8[ [ UouMn+H%I) kRƱܚ Co{\36 57҉_f7g{sEw?>?}8tI!wMO7TE.MW| ~?} 7v[gkp;aKK5ݢiɂ&y&Z͸qf8 -ٳ[RT}SxZcو,fn@e3 Pc?zN/Y~H֝$u.kV &g($@iZ bN1 蜤׎6,l^NQa"ysRIЂ9-TḪ̌$Ǥ9yّ»ABa(mT9j:WePyy9Zd.yLkpUt%4^,S|o%Cn =s5'4@6ʳD2}&\"qy W4sfsҺ7)ӗ >Fh-,H@w `ÙуsEV84%yyL,ka*E 
[-6K5qVwp.=9OP-{kblaE>>dh}Z̢D܌DU;^9p)l7Jp99;yKV#K y{؋d>)K}(( A#AGLp.0=Tr&*r(9j: Q}le(.qBUmyH\Gmg2B]H+Bd`i0+CjrgkT=KT+y\H*'#ѺPvCc~u=5 =fn-fVw b:zjx97Cu_h4v Fi >?iFr<@M@M0J+h%}RԮ{vC5:ͨOZ*46"7"4ϊgfdXjmBcT88;@#lU2] Wޮ' -K_Bx[ 1H`Nթ0i>v"Cd3p^7[}K3k1pgYF%`Mȹҗb@|$SE#}裈%˛`P9iK :jmDJF$Gƪ/<7B(\*ze<Hn-IW'㭓xLd"- HPE1 ^3&vt~{ Ƌ1 0PP$]I(8W iAdE.dDX#+yx^m{͔)\DUE܇l(Mv2Rɘj4ނ?li B}X"Z鼶"c "8 &u+/{ϫcOysL)jaẢy'HLaȌ9);96Yqu8v,ZWAŁv/r'.R;ڝK,hڀJeLeΨȥ rk9VMZD]ӥS8ãQV;(uLfLJyQTB9sLTv9uGv9{ֱ9dt3/Rp:~Vv 7Y7)!a[Oaށhp:ԻُƧ&s4F`(n8A; Mi8 vq }t[戶Qif)g{F0(cʴL(¹Ybڔ7la=1΂ bSP\}+\d>w:'VM㊏cb(MStjêf͙ATu)Iq%8L:Jd2<Ζ-%Y' 4743ȲAo < a! t R'j gDqyNM  Z5=ًi>*y>~O߻Nw-#=2(6-4o9NMfk JU}#5@Rq̂2f-1ѢY[EĂ' SnC50NUhrD @ʒ 2=K-M䬼dHϹV>mC \5RVӌ=mjj [/5Tg wKJߦiv nMVg=l7aX,b5"Qtj|ZJCj-T0뤄d5^2CBX":7F$/r@΄"(4h1'-rK[mnZl_u 8ibմd_*E.>Z gVp#$,3s1x(8?Yo@KeIP]| x,v=T0aK> w)^khIl3m)<^2Kh wHh ]?+͌דѰ iKm25wl FF E}Y.~wD\( Jrh\v^ؖLnhM%AސYW %L( Q aBB :\8#A"L$oS7*j9>67׻#y(fm.n;@uQ+nS=5AiE^C 9%#2O9zALghAWq :N:=xr Q@.gY!t4ӤʑC&gFӼ`yd>cCާyγbT"o ZMp_3>M>sǙ >L#!4F{#gzo!ع8tߖ8F7OЧ3.\= l={_&j/˾rp3YA,LN\9o@sJ>K,h4nQy "$yo{~'g[&,?k+j[i{?l0aTRi}J!Ȱd\]~2̭\*޺:EWbu6f߫.H:R 5 [n= km{6bٶ ZWM#ie؂$G]L!s E X[Z̒M@YYR%*%GJAS#&1iUb›ҭy9ۨАH1&0KmԚbΙDx*Yr;sN7ǔM?—>5S-q._?wB U>_.H&Ojxɐ>Z)Fk,)v6x5iGSdXQysɎ|2hiܬ5fmo+1M7q0/̰T$3JW X1/)‚o g}*,G`6Ak(9#Jo">( 6q@_fJsc31c곞(ehг4gZu[$)N$J")!d`$PO1&$L+Mxj#6nXЯ-L9GE>}nu9|m^&ryQgP!h!@LIe͇xu弮')6/*XQđ -mզjӑVmV8yx+\+>dZH؈ICG @6 /5P21|P ༷T,NH]rgW30lUUOKŔ_y 43o;HwzV&b2N:Sl&g )cJ.ǟ[t&;.lV=}6Kزn&>Ayvyȅ-n/v7:2?BB̥/d I?u k@Zߨn W 9쵂g8:9h3apVΚOaYQ?г2N;9LJ缯Ɠnrnˊ`/`a\2ҹr؟EPK]nqMF.f%܇чeNCo Xl$#SktTE\Z+F]$]YAWS[_i^mP,Un1[}]=uH]ՠn%WQhm*Sw2"TJq ) &Wdܬ*[Vcr7Ԋpⳛ$q&󔇿eb&ܜ[ {@޺"BFޗRl5 u=%1lߢGOfb2-վyfAѲӬbxL4-h!Of"nx.GR[e#ɰ(~X9FZq­ՊdH1PxOWȢpi(o@~`oЛ-.]Dg c4=XIϴPN*,Np*F^. 
LoF{jbu7&kcR}zQY1Ή.6 ^W}{Rw6j{n ."BUT.fn-?ȥ\fV]f.Jn>Ybϛ=?3 ޿O"[h.I!h Nr!ɘI[ -.?GJ0rO%V:f1k"ȓ 29IVrѡxB0-ZzCa< { F DAC]S@#5N 3x%LZ6-aBcԼ㼔q.l928Te[-E .Ky [^(Y`0-ì|?]qY*t?mgJ%R7vӜ^ϻQw)EdNQcrPv`hT!`eM'Pa|qʰi?u4oݙ5'"cu!3P&\z]^ۻ]qu/7Y._rr0x+i'!7t] Ua8`X'Y,Av1ѓ>{׽1Z+#{] ת.^:ɧX#cp2iT aduեg3>Co!P&5#wBot~x?}_??~݇:.?o8SYϪX0lo~ݏ/E܈:]7:W+}|U~Kc{p8w}or@"t<./-wM{peu=o35;Mp˞(W @ 1S>,H5`(ϻhn !wc>^De%PY2jƐw!/&']ȕ.F疵Fs{:(#-^j#"Jo$#VrkCQ!JVd(jlJFKh4rsTēC4M.=zch-'ʕ-76e-\6x) Ms p}|?M|Z:*1; Wxaے/V` zJG{.{dX-B0 x@ `Ï(JXm &v B w吶|~3(4f&UJs,aM>/}POr|-Mph,'56FUB0/ cX((sf-9P;s%1{b(3fFfmPF Z, 5A;Ã}tP&l)P 6`V%Y2@r)+xj 6Nr ^~藡Vs)xH̹idzjے=fWm9&G׭ ^U9\\u?( I"SAd+!9e-k vk =T2xN иTBU,`%M&9'`T0m( j!rXbPGW1 B2*Z驎7a;5}_aF"]8rl[M]xB.c^ ][o9+v `^NfX`s3tf1hVcbKV,[m)HTͮY{vl:+rP(WѡLh6;gK2 ّd]4wEV,),i=9GبXhGRG-=v؂bbh_Nhܣ|QͿ]d4ysB>ht5SbȦGx6N&ӋɆ*m{\t4<<""ίMO\]Sf*yNzN.}W3mﮝoSpR鵥lxa[gxJT*!Es˘O_ׅaW_;'۸6y>uZZ$m/*jL۸~qٷm\egw'Yگ Έ\"dkHe*1Aʅ P#lYIԽLal_S :@dMQ 'QK9Ԛ*"gd)CC&Zm6L gvOh (^+E`Q[KR 5.l+Q/؎H"ΫZAZ2d"H9֋n7~Ιym/`ܗ\fzr'>xC2c?ur䍧_WM}ٕjohr&7EY^RQD_CI',y.YJds Vs:shѕKx[\@*J5#GKQ,KPZ#c3q#c; iƎXLh&W Xf6/o脲8/n0݃ b|Q/ʆ|QQR:82I AXks 9>(o5i[8a^J@ TM1Z]=] 'ma$idَq:k.澠v3ecԖj >P zF-j'œGɆjf%B &8vBxXu P-ZPjc$U2qZ@P N5a3qa_5b/L?vEDՀ"ޫ0% _Cj Qe}F &kBj!z@XF-S2BX7&AVdmJřcBMt@{J;NTZgg;"^?ȸ8j1OkʹdW\ԍq\pTopƧ\Vm=3XO1`"%*S4[Eb6wH/xL;nx ĸ!3r6OI{ ~}a}׊~5H#Ld/*B*UJ$Y F?t-[s`tg5N0Zr]T2u@}ꂴf~Q;Bx[~oڮ}ͅ$ QFYڌ¨2j4FdVoulynLZ9ͮ%3l[@I`|RS= іOȎ,}np5U5Bk0܅/59}ɨB^m(dDU2;S,9ːHh/Cl K lѰ rojKS"-y HޤH2L K-Tm$z˷8-N;NeSչ]n>bAl_1,B%{}k-b䩞%I EN3b1y^S>l,ZuSk~feeˊF~?ka.٨R;q:K<;6;43pbw(Ji zӎHs`H}k|60u>2M!!2pFV7J&y@˲34]ސ~}7,1 ɠu3vԙ؈rXuJvhZ䜂ۯIo dUT214TOKb1XLЅίw0ԛ]V3]0cXBTOHYBWB(^QdYM/)v+F7ƌ)"QkYCm9ϪZ^ࣚju"8ZPI^~}Na3?c|I:;"EfI$E玞pZ8OY-Ě|L_Ҁ 7_aU>K?x;8ExD @7FFS+AU̠/ 4YU~O gMNW@MYן '~Y*S~ <>e\ x(IZI}CM̀<$(ś**B׺RL8$螢OKMXUּ+/a(G6 Q|V:oG׳@݆>7xf'+._e7-e$l ?~Ty?|M_;.>vW= W1[7t5>|9!KbP`Hha 6U5ݺ#{U;q?k|gkKa4)P\!#B6ɱ1c#iU.Y,1%IJ۶Ѩ!w'cF/< dHN+~ŲlfI6(]&"֍F[=& SQ"̎}ɆJU4g?^}|41[4,rO%<yxCUҡ*PtJ:T%UICUҡ*PtJ:T%<`+Q=RS ] j].U`֝}Փqjlb[NK+vAR*YU4Pmi=Ҷ{` 
;#r)D!)&4SAmcJ"Jh(씧{b(YSbDrbN!F/&ٻ"hPbEe5BQ/ {'6 feqGCuŕ"0-ۥI)g|6Y d(ϗ Hl Y6] 6JP|"S2d"H9nt|43-./~jF.E-r[NϽny5_~n 7fﰚ; k~ͯ\MٜήT/3T޾1k%%KQ9yR΂b5L10`eL<3i]ɹ(ꊷEb衄 G2lY ѡ0HFf< Wi4cG,\[mp%Qʌ-d }{v9K޻qAt6[ِ!*J` [gG&B!kmT_3P4CuByQ /Fˠ@_-LDrt\}Afڱ+jƨ-}@B[`O:O9 m=f%B &8vBxXu _CًN@&HV'đ1 N5кxA3qa0LǮ#q@{5>$AuhY!jϨcV`d@d25EDR4d4**gb 5YaJI+ 8S pkَ'ď:2.κqZg3-uc\\7[))9W5\w§0)"1@DjfI.>. 6ӎ}C>]ٜ>s?z]}Ţ< !ٟ}4OxOl!#P:Xi}^(yj v:xKg;'8;D@~W-ipix_l:4GlP ץ-K.dmqd;0aZV4 )>U\C)Z{۶B~$mhq&EK c.2͒z%El9a(2.gfgfq/+ DX3v W?lP#g\!r -%& $| 6&>mD#8QRe=Abř:uB7V{0"P)`"{&CDGahp8%b"[b΂wDðq3~2qOvzn;f.͚{/_F3CjḶ:ZÊ:r-/ [Nc 1o;;36Vn4ĿRY{F1O䎡hɾEyRT{qG5-X1:q@ $hn ZaDǮǡj|#nztm+ZԞ dБ8ՑH"$ L| NiC%DiEs)] K\5>(Pʍ"6֑aYqGkZ ض:uD- h)F!'ck^ܵ#n0ujW>:TLށ+]ѷ^*>HМ!r7E͝'N6K@VG^P/fFɍ#!T}:If U bF0z4:GoY[.lVy5+02nZ{yefzFn.|]h~;s| Y; +8,Hsvi8~Ǵ,HOUYnmDTmu͋MϯmqTDz3dne8Uv7jc>⌨9,6 ~ L(d,AH 0eN[fNn$"=q12*.vHꙀ!Uxd!R&R/5eDDL ` Xy$RĻ<3:;z&&ָ|<}6crQYaz`,m@*Z4؉'WҎ_mk>)43 xrDѳa9K<$F`,h` D8DH<#N0t W!0ǘXOb:, 3q1Zm#a3ÄAYGJua=N7$I+/4yw'xt548LFl7cܕ '2 ,bNR=`Q8Q9B4Et!JmQ@ 9u ic,8ZX]&hD0Q:i+Ʒ>.o+1YݝguFÁi٭o?(ہitJ ]]!HkӪ7bKCdD,id+Z&f4a PduEk̾n'낋f07uVS tpP?zUx}ap~aG3Xa/FE*?E" Hxӑ 0zS7eHk0* } ͸i1xSq? u٫dןUwumz߯ C[r_5;\";ŋO-֎a'` ԉ4K_`^_M0ӫe m( ~_gEgAm9ʠL":vRI$g6SUԡ7<;B?}W-O#ip!OuߢFֆMzcċwL:˂kF;\ $as`qo= ?>d=FJjVhϗ&s0bhT2N{"D|=-Xd;U+1`S,W;ѢrW[pث"@ vbuL󝒿8\SgoJU1)=)80 3jǖۣ[N< `<\r=lQh hD`=-t&ʚLf:47;&A|1_'$pw}5kdƳ̐b,6TP"Q&'EʵQc(DE縋 `Iع[9m-,|}a-V '՞iw"OyRJɀi@OW XpK$ 4Q V^@2LҢ5سEjr- iY/v-])|zK.GWr,#cx[(  <@rFϵ,wJÃ&4Pwkl}K8|f x*3ނ%QVj|PN'7uk-g񄗬0"瞱sP](F'̍)]S5"DBDk'P(Klx B N0D R1wZDl ha[0 ^!GkR_Aǃx=%փIFmΎu{t~4JBJ \[k4x%wC~ =ҭ Ds.`9 )#f;Ģ7KhHQSΙtMŶ~+Ymf=40)`B@)9IE$s2JJ37XG$ۃ/NJ_'FGYbu ;b L\JMnA%0.Șsh$Z=!;ёyR;jiF#P;^E8fB!84;\0jD4QF3%Mg#!v.#-njj`& 4NxK\AkM4Qb a aۄE}J sR &__0FO`OI1/9T4zP 5ñmM8{ ~?K:""NU@KV }fw *D xʈƨw.)pi7r :ݛ UE-6`M@ZX.=o-V\% շX.^FZJZdEXyN>ll*!RH4WI~]ʉ *蹝F:JMLWO\ŇRJ1yz\ccubvժbvpZ!si> 8_kKrmWDY1*2 S,w Rkgb|LMӐiXY%!k(F|K?*7.s. 
GpmuM6=Ŭe:s|?C6YW'Q}{}n*&~ v ǿٛW|>߾<_~<{p٫:{Kq+0| E/C@={Sݧ֬aT]ng }^H0}X,歌ف[Q |zY׃ҥΛyVuS?\XHGAo~%kqLĽZ5 + .}%BR(,HP@EPkE*35{~hVn$a֑,)Ja8ݤ@ŶIh`2jqZIg$=a.}>QB"yg[ Ҝjc6H>x\iIQqiH%/t:U~xk4W~8sf~^YyD1tCХ5zU4KOvExb a]a/ZbȉR4gKr ,]rU Aa~˩?ZzM6Ω@PkEJRF:ڵ_2&kM(rnA{aY lR.k%\".:ەR+hNH|VS苨>qm-d}k|Ԟ`<ϦLBWRlF%O{h3|hɝWxhK`-\P|wAK䯦aUlVѧ+m,GE  à}KwQhp :]%\ >I촖)YN3̔(* EWqZ%#E1~#oPTj1=go_BST6yI5watZ y1E)KtƓS+8Zkkkkȫ$o䭓uNI:['y$o_pNI:['y$o䭓u&NN;['yI:['y$o䭓uNI:['y$o䭓uNI:['y$o䭓K/Ý_0S|zeW'JQOJwb;D{&O`Ĵ>/ <|pp" NUB$I4IYB͋G>3E~|^%y agL/2n̽3қ+|6<ߕ#ͽ) &y>1F0HʑW$)1^+:P*?TjO-ZIKVjK;FXbR);Te1HsmjbVzl0ZĀhtd1G*b,,˕WZzjDX..Ev"a‹OSZ*cI$HJ vu Z#Vl_x~;8-p{T5ӓ=J^﾿ǫpf)2Oo3lҧv'ӥj?3 ?XSNtX(,ZPRM9stm%E3|tqZFlɹȝ \dz0UF$fW ِ,A jm2Uf-\6v[-mzgϋ4Xs`!Uc=(hXJfe-sAK .ml  8@(:)dk2d$}{28bͬ!BS"P68kvEj_ElZD"nEܨhK|1RkAg1.3HšEPܐ2:@5(Q%Jjgb4*t)`UuzB3q[/:]:Xg3-.Bc]vq>oڧdm1Fh0<oIO1R8z D4)UB>vvTa38{3k3ȝO1)olcӏtZ#a D?ד\WZ'h|g6? \cn4F3[b{u|'_ZW?Kahm(Y F1`,bXؓs;d럫|Y0(k1ne<4-'ib==ݭ,oMfI\`.{ԕ$I{<;/ˍOZ4!9 te<-R$ZeF'Q9.J<'W>aK*O"Orz^cGtQ#Lv\DU6k%z 2LVV҂.E 3 )Bz#H Np]4Ɓf'pCb };Pm%Bo= v{2G;84e>>,]#Gmⵖ/t\5O0>mV )EH%ZQJh N^] ޠ$"\%%J-meeVUI!fl_Y?@ dV>_{};q}SCqENUpQ?Hvl" ABɀ|Pv̓ji Cجa gF뇫(Kֲ${Zr4@@8J5ƒJ*))`iHs Uf!@.ٔ6dDGT$5Y֎d+qvȷZ4L.݉VsZE3jgr(Մ(r(IuJҚzʢZy3uAJ9"dl!ydCCEJ1:jt6?>%w邶.&JH>褥йBo;l x9E=ySʽթV|kL")jiлT2sJeSk5vBk>믕tzWXk8c gZz8։&+B6lA藕c/tq3it;͛f2\4fB2jZXk?gZO?n) kvrR 2s:r{=;ϗA4TP@ag%+R-~%,pC O ;rїѧ/?*o0]. ܮ|!>=i hj0)28n TgHksaj}&gUk-@(G(\}#_ɧF?]5WG}:OUo=S}I]_7nXЅe˽6Sߓ}0Iw}>^#f]ot=߫"M iͫUSf/O/̊);`ݻ/ ̮c$UsHF8G!ՆP?77/h].]/T_IEMpa>$>$}z̶}`Y%dWŃ Mv@6jQgRtP2P<3o5 ~\p 5[b{g?@mT/W'h[F3ܸ8+~}z F K<{5ڭwPg+AC]OQ C m sFmaYֺA;q8hzT:{L'.H؉7-Bsv!) 
6zE!XE-LHN!ȤH"7q_OS@ '763s *VQ[%Q3o)MOB_ʕ`݊>S2 *,l+_yͷQ (_ZNDHvd^ @sZi"G "2ADNAy\A73xCS: !xS(R'A- >ҵ2XipUYNYrbDLZk5yY "߄+)D^%cP)KLsJIbAhT%""mExܼ T8"*pc`4f Nq_SNtS )Q:n|?;ninuݱwL{!uÎ;xe濾vi9)PB l:Jt6jIUBW NbeU09 '2O3>v^qlJ\qb|ؕJ(9 Dh,@Cƒ Bl"&-h!oQYOz_MͿەZp0M_?bN<Р`۸ȍ^}ngqo7h'47Xߙ_9x2Yv!-n[[ p79 ٢ҟ|~«D*X9yȍ5J+TFMGG̥{B+ NH}Ԑz8E9;"ZA+v0h)_ψk YaDLs 0kzkT z k/O_c_¹+|Bi~$*U[ =_BUȒ*GhQuJQ{+b6Q幑\܂RUZ5W YuЭ898`һ^զdyz>ԴC $kQDudMT/+ԂQRжؽZϚ9>kz-MT $1¬ (#* Vh}4bA &e䘍Cs{ G* xoƠg\Fg}NTN8e%ZQT#g\>vC_Mpyk#^sYrxWegWe߭xxJxznf>6`GAJsK*uh l[tBrw|_h>!o3+ 8_eW./xӠ2;ȩez1ԯ$m벧.%q{.:`eJqT3(aJZ.yR K͒oAǷqݶK f\7ބy!XJq*< Ƣ#р @*ã8.1d~ Nɾ~ϖ8$ Z+E!Zqny5"=%,i_]y+5A,#RGHL$ &DPc!E0"bM%4F-7is}&ӆ ?~WF?.܂W ]o[=m6`/.fܥp<0_3frrqnGe=x7d}>uQQ@v|ʄM٥VS\]Õ}:(`d_8* w@R:o#%.0o˦MGI-_%~mS`!Ҍ7vngz8_rvKUuuu'  Z%[߷M&e3Q/Z H6xt4y~[Vc.i鍂}oiy6&OQcq>.?xu6)m˜hR.d:0xeWPjUKWԊɪffljgy]ZtڞmN?eG'Ggܜѫ[5VorU-XwMsi tvV%tZZz_b&}Qg(IbGtP_~^O~?~#zd<3Y%XeMvjo=jZ;o2G*ss 8BG>`@) ķg=jXPU|`p"cqVmSۀ &GA1EUh܉ \dZ52WU#9 Ʊֹ UYɁG6[.Ʀ=rphiۉ_\dC _O' qDG4'T@ 6V=j\OZAZAXV+@td W{2᪙fcz@\v;S|i}>n)zf%T}v׽a{fO˦>KfnUryŮ+>樓'C`ȬXUȃaT릲Seںt|.fno_jS~8(AS>O2&2yqpp~o}7U~n9y7~ j<=ϕ5Xlu"1&ЉyE[%\af;>\;ޥ%W0㢫IPY>)G8PIDjWE)!/E8hfû09%!* I <Ē<$c7vSY6|opiӒlvrl&,&M0lTE+VUeg|&J V#1ƜB^mٻhPc6x䢡kby!{psU˾ ̰y:Umuժ8AĔr1kz˩(ďSjw97Z&Sʶ-PS.$ ĹfE,!b--ނ˜Zޞ3! cO/FQk/skZoOB~cktv޵ǟd^ר~!5ruFkhr)ˠUa+:BSStU aȒIt8]##9U&\6zJrQ+kJ&%hޑ9HUn=cXx5mG,Ts{;А˻MZO_}!O?ξm($s$SXI"lAK4x%arޖmy lX¦%'-Ft%hS_SؤƽYtP}zǦQ[wz#ǃ6zXYa|)3KlܒaPker2rF1O|@ܘHُIҡDasezt2Cd #7Dh NCAP _F;M( r#[AjUy]Rʋ1u%tTӆm0.FivL,'0j5\B@-x2L l( rqu7syCޟN?٣Gj_?=4mov^點uI֔pqrڠ41c˖P%_X.1Ś(:d Q<]&BX^cԄSc t+'õu`"ĩjRPރYʎNguű'_XBnN`y,YҞv:[ӗtRRu3 \4q"huF%no-$ gs$]A˘,ѓBL^Hll_ !/(G%.p\atttP8Nl@ՒQ}#N0bC)(Kٱ޲R(W-ahR9arYNﭢYrb'B=V|#xXkH]YB}#|0:.eie_M"Y2 ٔQ(<4%K6'04&D5EOo?=brMQrͅ`r+߈%ghrMόs41L4yGivQV^{z7,G9Dj`8;}ܣ|uq*! 
,@?iE_*<ϴ=]A7dLwV'w1}~ ]xDD ; Y$݊d)v̵m֢bJ±3,v뜲=ߖsGvW5XkCQއp#*Z\HZt-Vt#'#1|$bȎҺR^cU*=";Dk@Z'?y[ۀ++et+Vi՝A*ܒiFڤ/a6Ij rhR\*܈ gAh a0*7Q#tHs@7sHLmn sn;~̑EpxϭO榙y~̓5&P8>u&_Y% r(;"rDo%T2qt>λDo߂˻gȭPzʚ1t@k5{܁oG~8_^_HK=4T~_;mSD]K$:x+50v%{.N"H"B"+ZW 9[V>ưV6Ѥ,6bHևUE:+hP QIG9U?ƑSƈY%B"7[}CC[]GkRإ[/m1BG v\d𒽦Ly(*%P=l</G}sx-᥋9QarƠ"%6h xGA{kBbb-1ڍ7(ļ6!8yBm޵qce_v|? {[7)n[4],aik#KF]3#KȖI5y!?~<8CT2. 6[O.`-Btʛm~IN>TSdwV lQ6.6pZmP6p|:>Nӝprf`1LJhʗ}()O!YfVO4T3&@Je _/Q$SGj (~KD:1>f*^=ii@妓N*j.;de6UQ!Ҫo&Vž<9ӆo qO琵fXRbK LRkVWj` zŤ5P:eS*zA@2=z4 d%g֯vީyQk?5j^*z>͞:s~D7g]-o]&=ϭ^-2ϊ¤2ue)ӶgIRt'mt&$䀼x1&q%;/F ^Ie΋x1*@_ TVZ2o K""&I's-I״&xzP6oE4>ޏA" K}P$C#!pGӖӄ̅$DLj7KN;TnJ_@vjh2+aW|xsܸګޢVO`3 @v<9*Q,mG)i!04, ,vDHs0t ULG1&֓Iy# 3qUcF$g & Ή'v=BqCz `rEJ_5KnW-`(Fϡ((! DQ.]R[C{B;іg8\|4BO<_ :HIi<EQcXET̆`}aG"ӨIۋC-'ui_܊A8~f8|!xL+ZOtV1't (bP#% GFHyxpR/{a=ZbI- CBmDzag _j ݭf_:yI/ <(}O4$ 4 C /汣j,aVzO[y"oS>W&M2~L.?\}g%5~DS2^`f*{_yaeL{ɬDxPCBqr9gGYwmSf;ojϖ,p6koOˢ_5I֐o'$\ɠg+] \IHLْ')dscE?D2}N J7dJYP =w d)>{[3%ʡ1wd@YofVR|ZU_GD@Y/2q-HK]{E޴YoRhMm W9375b @޲կr].%r]9 y^ӛ%, !x!8~!Zbcыfa#dpqa0ɰ'l}sWUy1Vўil7 ŸG k.;!rpرL+Jwcj5s?[vqOg`4}~ݺ.cj7[>;2(wF7ۢ&eRzMYϝvT$D)Ͷ4Y+xƳVj@ xF 1:D4JZ w;,% VccwʡygA˳j銬kJb]griRaCFP ; 2,(dR{$Z=";MguuL.{q̄B:p929N%0;\0jD4QF3%垱BpE(հPT2(Є;-qekM4Qb a 7\!o"=2GƅN~T|N1'|&tU72(9>Xx^yIeNNYY CUBf;h/Y gv"vVtSVF:RLݑ)peG(u^? 
N[lB\:{޽RkȞ;/OF'WLJJ,/I&Su62%%F~+Qxr5:^('B t!t*M5,xKa|E0yS]:5w;sXߜ58f$" !~MkVެi:Mzn#fڽ|vKe{p(=~~z.]WtѲ5A#%•za߮R{WNzlUw$4ʗY(8Lʋ$ܷթ8&&;' |:^jyt!kT&c{p7`I{;Lԇ1EEԧż:_sgјcV{s11@gQa>~nsdi PAwGN(Qhp J&jiu-SA *["bK(F[ :aFhNPƖ`k⬓}9R…:\zrħ/ېU< tzFn 1_}&SմgXބ ]Mji1TIr$<Fyc]łi>X[y`0xqyzl:(CN/V曩Y?'ەK㡏v|~״IfEM4%IKne.Tϥ9rN*U81mc e &  +N, *5MF<I|: ڽδNZwl L*DtڠNrÓ# =c@(at_xUU$y*b{gӔ||(%;!,?Fqx)} VU|-ME)@$K9`-uVn>7ęvo[W'=&AIYჴ vZۛVrz,\jikwJ\;ڏ$%\@/.?\ *vRJ)/挰+#q0p%P*IFupJjM:  &q;J*pԼWQJ fpήD \%iwJRˁ+G xW p?/\m&-{ͤ{ JwpHirHpV` #,y;\%))%}9&N0= 3)Fk ͸\:Nz*Z9ݳ ~{:3Peb9 R9ʖ_gBc|jxŰfodcGy5Wr-c1 Q\zT4q։~@|1%JH" ])3d)ogL(:+sV_-*^ ⇃ΖoW Fۿ/2:Gɴcդ]vÊ-m =>'N=y .o޾ ڛnOaGaQyxVR[>'zY0}̞_N'nCh,ԂP-vδ] »ғ,b=:~%["D.L]=Mse9]mI)V-sxzZ iTYUI~[Z8GWW*7M &kǕVBOWLɚp%M+YZgq*vĆXW=&ԆޥNW+NJejݕpu 6&G;O1r]W֯*>U6\"s6pyPibjdRv[:E\%%̈́+;Z-^ş`C a-K 3!8p*7,wax>U0}& ipr]WP돴B6\ `)D`O+,R׎+Uy W+#OM/4\OӬ٩ZNkǕ\^W33}ĠCW*7YplV_ J)֮ۙTpgJ&WP+z\Jq|6M+|=y GޓwZ+U6\""bDRI ,RέWre oz\%ݓN+X4'VVy.pEz:3ѸoKL0c{}ZGrWT> WOzr홴 t.|W/h7sd;-kwl{o(|He(Z˙BZLr2ܿurɺ{X6^Fv-Nov,Ԅ-2ѥ --PdCaxU+~K rm+‹,-ۉpBn\\;JZ;TONW.q"\ApJFWPɮWц'YyR y>g\GJq*V "$@3]hc#LkǕ\E^WWlybPzW6*2 W'hDz< \gMd҆U"q&\Ap]A.: T-AU]}Ez{fH\:~1H0o_&W쮖 GrWTmoZWvS^ KJz3 Tm * W+%Uށ+cΙυτqo{Te.:>E\E<*3\W*OTmkǕL)*E~[\YgipjoRQ6\}5G= ֮ 7/\ .St"Ѯ WWCOYI_tnؤӳuAuMif6`ZJӪXW/R)n>AL|< Xz#J:v\J\)9W*4R͂+uDkǕi)? CHzנQyuo^奶$d4zׂwC~Aߝ_O.k]i͓YY?7}Cgs0z@_?/-\.o~{Q4~wG4*_x{`{n= =u,w^]~1]~7iJOMֻ~n'ݓoՏ)5ŏ#u|[`|"޽^(rE}5 7g't˞y5x9r|( q 7pO?SU kow [?f+-kvJ|t- 5[ȗޙҩ8O1W1}}/˯n%YNr}P{x~vۏ @]ir9{;2Uc 8JЅ`Jl y.e#I5LMRo ד?-0LK%ruf7V\J9;؈G0?Q$֡BӝР>LC@1R %s#L mK dk ϐPƖR<$k"ݜ|ODL;α'9fqu$~b滌}jJK!!_* 9I tߺL@K^O4/VWlZȑ*w~iBK fn,(2>b :=KE Ȏў;udVh o'3YeohBJn(ˡ <#EGG<x-CλqD  'ʙKov%,2hCGu0cm̭a\xmYJu\Z=~ ;Jc[`;R#wec`T! f{ HJYl(C .VD@5Jhٻ8*!ȶ}bw!Q"] W(c"ȆeBGk{i}3!VQ KHP wezk|#mF2+\`܁#7ê! w8c)(|+DaH`$,A -%i03KRA[Pb W,,8:M;&s 00!rPkJ w&Q% bA6 d&yX +Ttgc( Ls$e丽bzoY6NRl ewPS.(Hur]]T{%!. (b< 2+=d] GH TZDzupb3l>! LbdۃArZQ8b!z_4yA}@+]Vw 4tlpBrr|Z1f rTvND($&#v'3^C. 
KZ [ŬF@"ی@-$ >:g:p}|tX\a:d.q t56bvZmZU&1>#'b%⃊E ]3ʃJ$JEV2q,{GhKrQ  );n]ՎG⭰8>'U]8pt5ڞwb: l;V+.$c~x^^۫4"d&KR+`+|mi,#z,%\o{q:EjC$*jPK_!80;2PR"hc2ψExXKR`nk 9ej6FcCX DA WXXVve5m1A=d$ыwAKaEG%A& Z/+w۷]_/|7^+,zNs tu!eOG՞Fz0߅R𷳩Ef1httkV?k158τ5rGCo fR/3zѐ|AmF٦FQpPaRDm$!C`w1ȇ v/ ($'.o%ˊmٖdTcQd4tUYrHfsȇ Z5$V)aKhpw`A(HyT*|YZ H==`5d}0&i` '0D W}\A\plM0tVaXÐ ;"ޡӢ9RŢ4Q5zg p l1#dІEq8!ǀSZ])xndj{w 5X *\#DqQI<(O2tSMI Z`+k$ykP*jo:k /5s%\U ZC Wx;Xu-+!С`= kF)a:|kY2>G4pdS:(!2 \ #- @^=@p!`~;X6rIWŪR"8زK1MIa;uA%=))$klt^QxwE@z3!z cP aW^XrR׵q:],K5`4hXdmĴIau%$߼827!Mjyt7}]TǍh#_} bm>41x9Y4 H;+?x6~,tr2Ќr,F˓`ׅU|--)z}m _9cK|Oe6f~-l5pm˲| d1hk*cB+T^3ۡCCUC굷H#"u7J #gʒ@D2R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)W %gR`{@VufrRHqeH DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@:b%{}R`cBE ˆw] (#%1*@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJW URX"%7J e!0P AJcT/&%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ Rx@K!rxފwo&XjnfqTJ/iz1Y!|O%z #+ 9p zHt ¥CI}Y`\o ޼q zu… $I .!u/thU_(':BBU=+lo 8 ]!ZŻNWS,);&B-]+D4ҕ>-`֮EWWf ί]!Jˉ~ҏz}¼F?;]ؘNwNh-CW[tw+MtԡJ8q"Z)< 6he ʯf6LeGuPC&׿2M]3\1 p=`v(PPd>z~k+hv>I/Og/& dYt^A)\F%曨oE%-nQǘ(bE͈5RبMC >-ԜRv22ºLF/`{,f&'Ϳbdt֐JQŎ3Pbv>{|n{-:dѵҳFYPjB*YkVp:K)tgy&"%YnժIΉgE}.-`\o5\ei*Dihi nLa7tp% ]!ZPDQEtu!"f/ƫk{|gn'fnj']hcEKruWs-yQ7{RE"BR lF䚼Uʬh,W:t9>n.%f>8+ctΛO50: CNL< l2'3m."Dgc㲼>Yz$lMBZ:Lmi4k`oؕe\볶^L&0_oX=h[cDi:XϽibqë8[sc]wn M>vG?6_CCbwfyI8OWQ .cݍ*_5W{TBJW#vTxDG'vY悼o_˷LЗ۶xkr ^N0^_i,ՔxϦpV8UӸ@d}=b+WM Z|Ņ=aÇ*y W fklT/7-/D$=[e1s%d1hۻP,W&kzP9)7x訵=#f̚U[7{ G38M Xv-Ut΅=<I. > q$ K6)7Rc%hzpsx.731;DwHF za "[<km?N[ǯM[5~Z7XVEpr'ʵDSU9%L >+$ZJB,QS2:g]pQJ] fy͞eu`_9}eKBO") p&$17Aj\^VĮjuq6 O`NamUreN33]] s 0:Wa%3'!T/JUgnCA'Uk .uǴ)!-L.̂L/$R,anaڄ\k'ج2jj_[fPz 7”~-cVmAt^i~NG%[WE[(xb->r`m<7KRc*>7M/D5O_\i\3zlp1ig9 jX6trǵ* ^7.v‡X(+I!21eV*x^dY>]+תgh齳Q߶@{}MUn+_F`y=&:Ygep)M/ƹeX)yL '0'dh/d0h-7'mC7okǷ-;oDxfT[W{O-wfI}n$c:s}Y>-0{#\F̸4. 
G=S9GBAyos  JS PPp6%LWJ?MK:͇/j2>@&LGuFq!cB$cP&=*p t ݹ ӡ s ]V\(,."LBT>6+,Wbδd堷m)BRrT\ƜB+U,go+hl瑷A:Ia B8qج{MFߞ訶ZK*Rr܁ú2v~\-[+APz܀TNTD⩨Nb9Zġ%󰣖2kh{{օ~|VZ)|0*]ܲzw_{j,f6*Aj1|&mr7 o60ίL(>:}DTkT-r)äYq0Wm5E3b# tqʌٚs :\ \r#+?{Wȑ ?!7VwW6.H ˗8 _WH^S=R7q";^rZ͚ꧪNQ,TtTmXMհJ5[Xmei WY5la7_o Oe {/jH~N,tq|@Fɧpv[l\6:/R$)ӁN2cJAaE['ɚI_b`Q&(MOv,&*lbA8-㊋y)V8jVVGVneXMuW.ΚN.)Ymk].vqgƵUiiC:+%*KHjZLH#dFzeBISKVǥC<-Ya}ݥv |=QHp=YLdmP|j- d9ϻ̓XnA ?/ ?QuQOJ<ꉝy,icfI.OʔɋxTJ)+HpBGɢ WiV (4VK1L J8HDyn2&΁4ȩwN~|AGh羺:!v{ .6=!6z>^)~{= (ɂM+TG Jt2ۊ,Y+Pn)v,Y*'R6~X^UTILA̱Ɛmle'_^wL_ v2̌206F(u_tǤDN{ 7>@D6@T , ! NsA *%p|sz?0t`Cbc_<~_-ˤ-=Ont9k+ϦWiXlP?N#[r,kP#yI14hqh}D>qv:1L*)66g ?9GOb/CAkj6#0 !|Жߥ*|4C=TpycʎZ%@:CL o!t!p!E+ĿtByY3y7+D@ȆPMÂ{:[ڻy$i^TmʹD~B7ު Ǘ%n%4YN]/=Qȝ&mfe֏m<C.ACGxgݱ~4Tçx8x]Uf^{=6Ww5P 5&3qG{rDxv)+wWri.n1О[SB>5m݌Ŭ%;DڀK&YCsvzqz+=kHZ!)Rʾgس@1]%KR>F g > *\i?}W$ ;WO+}\M>t/F?.Q9]uެ3 |&*rK4!R,1dG0}h%3]|l5= OI tpv,(I&$d$/D9d,L6?Uiy \waT3m2Wgoo(gFד7MeN|8!0噉>к^ic`W-ʺ-p(`hP/E=/xR #^| du@'+6C%1RH,)˲JTʋEQ}Űc1?Q?mL+'~*x0?oKHw)_hy f~} ~#&҉`>jcgΏh;2[ _|#S;O$$I|nnGгSs] EH ӓ *}-*Xs%:4WhuXJ֏nEҔGd4 W~/>Ŗ b%G>71pC31-ێVRROwCP ?AXؗ'7[+| #~û;u7ja_=ve VgnLw9;| Q- ,, cgކ8V ǻL$9~]}.tw|juO4u'I =qyB^+i/6dg'3#Hw}b:7s'i}t崻blQX.1pͶ>x@:r2OF6|tIō(.xکRYq5}(jyֹwq{9MTVZ(p,P')D $sYZ2Js>і*AمķtDV;0Ƒ}Wjq۪7\ባT7<'ETAR;5O՝^m36?(^5hl6JZE ;ʨ.{ɋ>RfuxZ3}ܚڒ9D%$!hW<%gi&`b61uT6eA?& t)O~5!xczg299Z2D&΁|Kkz1ҮVEahhsiOö`ˇUS;*K˩ \+ngMѕ0 XP<(kI ' "<] ÃBCps: 1R9I'!< >څRip3<\I%ntY  Dܦhę?ѽtBj}D74,D@UvJ($n+qI b]qפ%  06=>߷x (o?Yg/WpŷO%b/Q3%L[.P6Q{lk=wn=:BƔgT]{q'xfoYXsS0.{m਍ i[oVտe(k/ڽztF]m΄ak;jnGn6jEx;4FݎTŔ'lAr= Oy>{u .KcNĠ;J(|d!W F/4$ICҽ*=(rrt}ST2dz ^V ~}zӕo SYUȥDNZ `o p\mʃ>-V996;J㮪uZ`Gg?[z\էndZmX7ֶk[ikK.mR`;lk RCms̅q 5oL$sK W˞mX;Q`++xILMv}|'' V 8ev Z6--nw56{sA7 X:'hR@B.V:>ET6F_co$o|9_ :} 5&@G\q@ #R噔#`0ILH1mC78lH3o :7}-A2V BZFp!mb.0&Xf-D{nI~I#ӶYfa=:Fg|.Ǣpl+t`^l/;,YeIZ-sTȍ6[ʫ NrnI"Г HUf{~/^ynxc=za #r<:ӒDFZZa"7Y8D "뒢K?~hekҨ^aгlMࢃZ"t0?^z1ꇒ-b.n6cApd; ݏ2)~6Ó0:?I5s`zE\ G0xdm#5o-ѩ0~j㣶#m/Dn>^/_Zѩx>8i7f~zp~rzxP#a 
PJp❮!15n`MLgNZ99-tچ~ܞx{6?^xw0Ie\1'pGãJwl/G׳yu~wǫ;ئ8RinMÈn2|ҠqrF4WJըTǥ5MܺAQW&]o޼Opܝ1~ç~}?z'Oݧ?O椳0l"ĽI~=1O~N[-Ae誳^eLdA&.ՅTg@*ݹcWHOMN 1B&@;m)gA2q=+DK.2U8"{ 5pZ_Qd"#[-jl7k}jv3-6N։/#+bV3zϮ~C0g5{/DH2y>Rl }77S=SLSA7"6\xr3yݖƌ}+=P9N7$HS& h8JNh=h>).q9򐸎.dɻW:xӂa22W6s}f^# \ZUJD4kAKoOI\օ 9HͿGo.E׃h%^wՀ6]6bz{iUM+H )${.FDn1J+4zFFXm3RWD0F]碮 ﻺ*T^B RWNXF(REԓwAlI'[_~`l%dhkΏ?i 3)5Fp ʊhN^o0"LpyG77^Fp>L&?/BDrH~ui 劤̏~錗Sv'h\LV<$97B9R?BĕGxZbW?ߴ+f6{yݝps~J>5J5*noQZF+i(G|!6f>iВ0xmEnDiJ\ 'Ժ.u/{br,JQJ-z]O9]y8~ѣOדD[>4AV+S`ǞMԂ,2D- r Q aP/ @a{9ˍJ4*kB@]@0{j~Z#Dˋa9iK5::hmDJF$GYcU^RH!JTRWL#ˮ4c`[n8NZ1nHL(L B $xeϴ9=h ^n%`jWvtp$D)c:9Y iALZ KH QU!<'Jb6BNf^J2)SF[0{P# fV:-!X  li N`Y'n}p cR\+ K1R A1'e e2/3^!(K`DƧO.2G26}Y{wnXcm@e/ 0^l2eiG5]дd(}6G:"7UnTd X5Yh"5Cr67k΁JE;#֙sh(`:&_3I&(K*!t\,(Ǹ)Q`e&r`FqiסF"86(Z>`P@8麲h~5!a (u0\tA.>{}wxBU7׹k'Bkҕ+|o=QϨP6,gPeK/T)TBм>N154Ŋ5k>SQ Q 'OTBCI q"=q #U=1{e2e2E)U@TFK Ӧ US#)! )W ,%g3'`lr 9wLܪ Q\w >X6ۜD!1$9yt HxtR GJd2gxRSY'J+-nH3ȒO姄^DQ+d!4;j]Phn!/M}<=|j|U}u\ORwZləb7]hu4||$Gh]A`(m @EyF1&Rp~&mC]Yc(ΞOEmJci`TtIioe*g2M,Z _;L钋/Zw쪵ye{[qYW!%pp_Cd얄koɉìd.%\%Z5 Y4F G,=)ɩF{}½a5r֩/UaE#V]5E{x+ǬΑtX02(iA oNJHeǪg\ZI^(2Y%r3! 
n%BI e\R[#@-rkįH:^5 ڧXg5.U/zQz׋v3]fc.*%vf.+tV$D0l ׋EV}чj7}*lUl'5,ܬnߠוQ5w(답X?^ׯ~cz^@/_;g) ;TsI&5Ȍl,7ZL9 5;=s|jQfv_<i9ڐ!В+LTR%(Xȥ5Nd\8n t;l%7ރ#@我(2(&ɩ5F:E {CFlfύ;F]fĤ5ktI8pZ~}nc?MgӸ\Jbz#W eq9<;=Q)Yt)6 Md$׀]6ޅГɗ+tދD c&Lfqx,6p/~eoB2R}>hhݡ}ك?Z#J_t;;TY+t2Z߉a1:$YH.h(B0Z,:ON0 ݳI(vrg+;ئn n')vv]q6m.ߑ?HoKyUٷeɇQ̶F,\NFɡg-k 6\/D$tu ^sFX^/Q¡~9UMնf@o9bo=PFՒ Ȃ*~ULM^J!oL~pp9^=`A e W*|F.Sèt6PATJ$j@_ Q3gqq|l:ԫoj`JH.)bA'Pnsu޶:bVXo-O^88Jĺ-u2T~uy O;üyP ͧyTǃX]Χe\U(Wa[8n - M=@,5;Z(ڌ-2m|̒瞉d ΍WJFpu:c4>3quڟbHDloHC9jq 3 r;SHC3KE~ l=lYkwŪf}#8O" Jg:DA M"XZD¬e׸'sH%a ZlSF;CF/;ܨ >NjɎ fqmu8(u:*([^ A\S),؞yaޝz6\KPjNj`F܁o{*57>/ʔm)j(&`Y@fFۘi Ft bC.&!;ҽ :Ro$BQ6DQBO[=7RJXBr 8M(rneZGEdQ0A[R.k%#RHDc3r6cD5 Bbw_F^G߭kZu5ob>T X눴l22y$˔D ,Yp[}@@)9lJѡНVlOt'ZYjA0(>ѠPTj%K~VrGc0pZ|@-_?u|${'h`J6\s(&5`a:ꌳ:#7 ]#Ph^rjT+J)~)E;;\ K.80q+(a2t&)GsezQX>|T*>|Hiq|ؒ$"8)EOhNJ5u[D2U`4墵<>aҴK8y6hy,׽p+?IdNLBlVxXƀhHUA }tM&c0^ eƱTml%;HitL*"Q* J3K`NH*[-eu\:`&ӲT;9σ-~I9 ![uk>ˠ;|en'J6"Ȏ%KV ::RLݑ)piGWG Q鬾 N[lB\:gޝR+Xȡ;+-yYm7䳲5ؤf?Kp5R-鉼n+rrz)#B *:r[!oM #,41sv|(#L<\SzP5O54QVyWx3]ޮH _si> 8_TkKrmW#|$+T5 :N.`;ޟ}wߧߜߝpoߞN @fF"&᷇@֬afh6Cw:z7r-fbR\2aQKo o<ҋ 0$sv J<4,Ȯ(U]R(r^@ 9/@x:+4PklLvخ#J~:>%%Qwc YbRmRmdҸ5WܸH-դבЈ䖱^3ϭiNv(O4 u)l9%2jJKbtTZ".Awa;*ٞ8 ;di˨trVn;O㶣5F|oYx8%߷"3#Xxl.9-2͘A6ƒ EiC>~\M=J֙uIA |i5 8"(\3#;K ֊LTc>PޛR`>AA)xVΜlt଀xvS9K%@=\gT`6 \quW9a89D3 .夨%>)XojNu5?~׮> OSF̄R ֥*H# x^sH{,/+Ҙm.(p53fcsFz߿= NKoOU}w's~C9$+}S>OW7ˋ_OK◝l+9$p3WA8"3]wIJ w/?֎[D5۪ƙg)*,BVr>/ }*t`ٿ,ϢI\891Dm$9hk@4~sAjuKqdٕb,2097(DؐeG@j%Uٞ>◎֖ϜI*KErG\&$#J Ɉ\ ),5z!UJw]Rc)\DY'D܇0̀{P&D!z L X ?s}z*i>" YaM>;gSʽߞ)b E@%4:sb"i.2 PJs:dc8W}[{Ɣ.;2Ư'mOMR`'!)_ꂀClҴ*HsՇt?0+2J y\JF@U!J ڞ/]ڇۣ5vzW:Z,FP1[e:A;\ ɦsJfHV:u)Kc <+sR Vbwijj8(oAϧ8L]4?Ә.cj0yuQ[=o|4ǥD.2Kə>; B&֚8;+n.WR33VCJšlrf6 5٥1%4λg%5dɎaHC \kJkXؚf쉅im`u-{, w^jf'iܓ/8NwAWXydG@`=d 3zϴ֩?* 7 V%uC | B l6Ġet 4(ce!J3M,v߶َa%E$dQ zz8FR&)RPv5qaԗ4˳RjtDlM?ED2"{Dܨ1+P92MleR3WI `2*(ӄVYΘ6bcTxC2R3!h%j)Ҁ7\ƒv̶َO>u \Tqui5-e˸=.q Ӝb4&k͕Ԏ<WrIۈ[W,EcYdŮak< l?cXrcs!ǧ~z5no*O4 J*H=Vo#yvŒiO|~i \u( r;P^e1Ņh\V^g͓65+05B}Yfx^U=tqLΫAcc;?^d bѡyCNL:$',"vW9FIB E.!(lFύBE 
*GJkl~VxOڑxױ,}=c 1,\WEGDMG(Wtc!g]'.R @?Ch+]dO|c+a@IͲ%}) y$`l\`$< 9x4ZsyС.{5ƳlPz9'+<),}(2d![(frȳ'wa{9%-/[MQOʨ 93V 奀 t֬c:dt-GCq u/,l-Kmx1[Љj28aCT-VZԎ1֐oy (P X,:(Ya0MSDA"&1LA㱬uxu]XGGSn(2iR%8aEup6Zd%ͳCEutR @0h,kWr^^ qM=]yotn|f1Mۄ_ o?0|W7ft6|Nbj?bXz6JQX8[`/ez^"=DgmVSKDrCUJ^eIHzTŧ4N/?JԢ?~|˒ƃyZIUTuv?<Ǩ f .~iW]yS_]/yWM|t|qAv^Z3կ;\MpG$o=g.9MmM57w}7%_ZoEŷ!ĺtyn`B(ڗj'ozʀ?5c?^.͵)mۺ_w:34wjeYV^VY#9ՂN1W?u,F!sd_" M*ɽn\hz f_%|]Ms j ][/h(HاAC_*AOtgA w﷟\,iiDT> (oZU[݌&5nIx7t̄ . Q3Wzݩh7M qzix`º9ҚjZmW-MQ6w,(51$ϡA-;΃/!ژzZoznz CP("4xQTBDW283!2 S9D* }Yt,S|})-5 uADgYk lYp1B5'kɊ HсLz^?T~mM7V`~aꈹxnJ.AϼD;iЭSY4^Ԍ_Ռ塤$5L UeU  |7Ӕ{wP!SiSkNrQɓTr)hIdwH4Ⱑw wIvf?$X૭YR$yf`[/e,-[i/,vWOYO,mIMRGU6QCQET{"3̥8f'wY/J:&RhlAk옃MC_+{}N.'ÛqHMÏs77Qf즊wM%)ǁ-L\4Bh-7 ])=NKS,i5esDRbƂa)Fmf(rJzA}eiMH䶬wrKg,scADn17P*iׂ1P64%e ϣk"zhr >ޔ NUp"Ô)^aJlac-pܚP㶄g+V+$ȇ.慆ОnoV uwN8]кxvχL*9fGZϠCA6!&12Am!>>G'9/-0u$jA$uXzR2 <#:X垲/A' ^)DKFk%d@}.TZmbLc-LҤR*seR yLx1L\mi\m t(_ %H'qe)s(4!1) Ǟ[=?)=5 z7yFwp)rR O'㰘U)^Z&˧7˱"h#zItB'Y(Oh~#=9%~4կfpΫc.]՝'e@Lł}2MP#P'\lz 6D3_(_@bRBqaPv2F3}R"'Zw}bӂL5;9ug[+lciڟuIn";oU 8ֹ4 $gO&>^Ql>Y pGK f՞zkTͷv7>G j UMf##Wܚ˖|hn dbs}E٭r6bѵ5>}x6 nJߪpG/6tݸ٥x6ѭ/J9.UsũFn'V٥ [\@ьPnq.Ҷ- nl=ҶVt; #2G}B,j9KfH22’,GhRbq ~ܱ8Amw'v޽di?$p`2znDm4IΡd{tbƫ rrxD&:C0B`/>Nq>9d ޿[&`1>)@j9L%"N$)9KfL{aAnOo+{z3AH 0毨.uU{sO9~K绾*)[a3y- _iWfyayO`gWo0i0p.T"(d@PȐ("Llm6]QB# :ECD*Z Ĉ48yDSH`Ljlޫ:m֡]]}ܧ#II:5`:[꺡y9:I"̍BAb8c+,c3kLJq$(4L6I'XØAH !ibpOYqL4Cy]Iƺ'r,&>08ÄdrɷSRW( D8t]clL;h3ŤYQiP )$V$jU & 缍 [|pQ/ ./Ϙ>_]Lr }m5r RQ7Qtz=:]pb2[֯ONga?'X9{ Wi|qJ߸4ս>ϮfQsvw7ӫVCa e\-v{]^͞gO7Ӛ4ݕ,n? 
6%6t olUeyV`XO%3e*2ozrf79Κ[t{N6W[* ,[ȴҰBpʭq`*);ap\_ojĖjԉ_f7 f]:翿?P~_Nj~xq];qO`cV'M"Xl<3x_{ j[4-ܥV{=(o >4ڙ\O7{P6#.j e]nDWW%(6l=7){vZI}:CebGCΓv(ݞt"̣U^'yt%_T,$5*1#xw[&gi\(d?U.dGz`b#qDȖ‘O2*o@HRp)*Nds ;**ݚN+k:GU<5xœ^y:,|Vu _e!2qr 7\@Ar2&ZAx9z=g/Ta)س/0xb.1PlgI,e`Er5BRg"*;V8>G#-d9ځob K՛K?9XÞ͓`}^ $kփ")Ƿ3h&$XQN$Xa1s@ux{ޏshaUH1[*gc):HbblQS1gEi Bv"CTU7BE>J'UZӠ8MQ2lMǡ|\>&=K˝!g-(rxH,DrEV(~<~b:Jx$WC/͂ dNl .DKFuȵ1.Lw(*vY%TzӫfPc9Un6jujEboP)o珚nU⫻ Wpë|g]hK L+V7@kߴBjLN.Q+Z=Wq PΣ*\)*)Jn#W#נLw΅y-FZ~W\fϒcjnj?/lpʧ.ۓij5gj qNa?-<:d9H2~ЗPxŞ&V/ݶf.XtAx 򊖭:Bp5'(" gջsO{7QQWᄅTI-2ͯQҊ3 -g€dwW Em~xK|q Ox?wFQ{(m̙ {UoTJmoɇh"0+G_(RJ/P9I#Ld@@2WܣOM1U4;,zrP;)K^K!'ҁ&b&Y)A3CUT8t U)п]?%ԛDqboH;<*jʇw}ɥY9DXAOD  %I;pYhGBQ-g e}r[X,M`)">gj6_AS\gX*^9{w\.r+5J}{U$EQCWb L=Ow0 "HIYE*Acߞdlrg81pd<,H t X82e(g">pFoJ#68R 0/U{1[,}*|lf"[6qv 1Oі%iYg[K}qc:)űx9 a?z: =KѹXIZODRΘ{倝+X3v6*+蹨$&(gN]@uŸ|uF >$.?t=}uR%٩@ 3RWIgͥlUVSWWIJ/Q] gBJJu.*I婫+Rbܩ$}+a]6ei12* K8Y *J6uZ |_ 2PK&,ҾL' \Ӌ0ovsI 27% d=aXN~PB); B;q#Jv6p(Fc(bOͷ~yLhXקoǦ~87x -,1 4`L7nMwۚf\lC2`^!f\p' =$9gW>;0;S>P_H|G"=9DsRoK1=~ܿ_} l(R5Q* K$|S-S]ՔG>}m14~lŞd~=.~~7|j6V5B!v㐇PX3rVWMHK>uL0`c~0pi X{`Y*$Tɂ`P~}gv 1sUX.(Uc$VCX-4Qpcwk *4k ,Rg2DO~ڃf \2N%0l\y8txxzNǠjwz(gʹYsoTL8RJ4RSP0c謘dMWDK!ᏎD!Sr)m( VH{n00ązaꅠ% ʿ%ʿ*K'wz3Po2 q  Vj>F^* HВ!JE͝'5K}RH SRZDPB4f (ufSŰF6&>"% v0˼R`B<|Va SX1c2b=6MVHKDfxMd۾O;hb!-q!h^W!P3G|ulÕo(]"T=CM"X,: ]VM5n@U "-N5T/8wћDZW'`%xW2aږ*O |_Xr>IϱI/~0cUP-4$0) SU`' u=|(݆h'pe& h@b!/km %HpjH2ћJQ HCP\Vo- @ "4r7NdYl@P2=ӖCyh&WnNwaoalɣ?{}mxްt~gfCֻj^)}oC䆄s3c +m,VѪTXv+G&5ۻN~R0=dmNKh]?j]6wsP=vCgͺChYB˲wy[=/>g!?.cÃ~iz?q~e0)&Wzb9}i?OsJsY;_Hyrx0H60=m;Di>Kxgw{~7}ۺoo}՛ a=2H$FB2-3Y 7I jN"F&XEn󩖃>8.B]/JsQLKdn;j}]Xdfy_\j^6A bƸAE-GJ= A摳4pC bX^-)N1mH] ~Lݭafy!"$s, " #1GH"QKĐ^G6.+'WUOy:tDGÍ/ 200 #*ŽD1Q 6<{j׍qq%V)#kh/ `DgsB"s>R0 qDa Ǵf}Y#'20gw9CC (P۲هR|v2^ʡnN\?5vgN{R??T?&cX-`*6Ixx3Z8K+׿ӣ_|K I]A(&vUΗDxO-.&zOԙ7FZeQ&H 0,6rsPtaz~u~;ݔ.6P^#]ͼE;(ϰ.)ne;صf544. _5}3p e݋=i=7M9Z<5 >bӳ02k7?"iQfŗRxP,~-\&.JS^[^kC)WN:Y_8:'*_~ ^ >I)'M^3vp4_c7^W=^W. 
WBׂ&Ɓc.3WbiB 9?vb8qkb_|y:L񷼘 ՁEhEt֙W0n[mޢnX6fCT׬3hwi&mR2nHi;y) |"Yr < jSA ad8(@\K5sJ0[]̈́L@t6El&CcHb`n9X`Pi8pI r&5e\ԩ_g{" a j7;GIE(o&`s^"=BV2B"Ġ'Dh/a)`wIzL"ey~ mBݝ8>"FN Ƙcd< ֗cgmI %=_jY"M]56-vyKnЌ0ujnc<ȡk4@f[cR;ĬWL@ 5Rz\Tknc,'brYë_Mi}7O`C&WaPLc^C7k=׮.>}m14~l0Iu)V0ߦm}Uf}E~f 2k>$fQNKibM֞ 2kQ X̕J"BBz+dZK 5bS/ӮFѩzcr%KYYdZHt*eXDti"̒;3,Xܹ?Xxw?bu֕myKϗO"`J6\sa aT`KR KgQgtJܠ63b꾂d/j5E{mo[:H+RkAUEm +D5L+(g LTKX](F'6Ug*N|%)>,MTBTБ @. %7@^ X̓B0 L(Q޺:=iWl=N;cʽ3ø` Ìi/ȟnKZ/R=).< UfHkk?߲~*+N^sxteݗˆAƈIpK1Kuv.*J]u1VR&ޕ51/ɺ+ kd#<ވ~*"f5A`Sl ]ՙ_}Y6G&aHZE:rJ0} oU0iEk .B߆ߎ$נvل[2{-n6Bۻ0O\PX1u0UBJ"Wi(R4<* Z=RgZ˒ ?[*ˀ9IVjV\-I?Je,}Y 5ܲu5@$HBshu.-Oxn M ߦ&Kn˵3KZtU 8-`R3-d-J,<:Fg|.EMK1 *Ʋz-IAH 2GeAXmWсܒE'C@#uNw_{)pr.xt,RPF,"ڌzycL r.):hz+w@d5WCͲf^%f01xP01Ԯ!S@J:2&"Ą ];'@$iP4sRK.Rzfof 8~vK옟1G~\zZ>5=޸s@f0e`9I'f9Ida~)[-%9|lt N?"m'讀*\i!zWH/[rFuāLYfZ#%Ox_3#z'^V1 3AhͻU\<#7}A897V7c0Uf3~.|"U#?,ZAK0i.CI;;EC.եqsDU;1.i8#+UM(vMIlQ2 /׉X[͍0`ӳJw5k#hqqoUG]^Q7]+H%jiyK)kR'1?x?>wԐNl0]W/UO_z?|f`Fh ,/ WCϻT[Cxˡ%]n}\Daܯpqfq7$/FtfYy@[ _h[,Ol(ѺMf~>2Ғ=?E#_nHߕ@wt# C' MX3telݿ?Zg7:>wd"p > Pmb!+DeӠ}2H'z[o6,!n_JuYB+&t-ҹXfO+I1iNNdv.gH,Ӗ|%g:ʰ(Cv teG=ZUWŬNk?y{#}FO@ʳPD2}&dTki,{3ЃBw|e'KO_&'!c򶄳 `܁DO/ g"D%[Nd'q r<IJց6+QDwP2G>y|sZCh4\>@ZɻʵRL(@\,! ֪=gA4]f'- ^ertӲ!c8Ou*{FjmP]xҲގHi(}:OD.=;z"vd8wuݽ"|-УH :keX{ndT}*:^:WuD!Ȓ\ӘA<ECH9< 4KިnX߫{ KB/{ޒr+3[f:׋e*¹}F%Lq\>C 9%#2O:s$MЂ6LNw7yQ*tPZC! 
eSފ/=U̷l݂oY®x7E;XzEXawgD%p`q*\ev,[ox!9Y}VVA5Ԏ/ipvHku(;~^aCBMy $)noH%v"KG+3CXB J@v% \%$#J Ɋ\< ze}Lv{uv tRtApsP_Vkք^ڼ"g^vd4M`>uh(Xĕp($ӺEJ0}>3GN\ʃ";i u*R*3+錂C+XaWE\iuHJxWsRq"UWE\PW$f/NS\oճ+ÄS4R+I:\gW,";vEWEZWEJݗ{p`-"q׷;WEZ+WEJ׳WKi|@q]iUڋ߻#ƭ^['O W>ս&Wp{K/pF(׎`޼~l<9z;j_ Ө֑aS XX݀2w._jf{38e1v%"ǸXd̊ņ7bNnfKw]'H|maFg__*~U&xMa84Z)z{CmSkJ.UvN_NV3bpkaY%O4|$L* $նn6SRTƙtPzN7.z1)u0lk͡u)Eƞ#rfchqV3+x:>mVj>_v!J{nf^߆ɶF\2e]f} k9E7q ㇚kwlciOGqzyć%.ME4.d5!~dvVg6_ae㽁4=_CJUrovb @k䒔cojH ICҌ8T%"A4LwtOiRQHv~ 7XT7o<SB2`зymOOiQ?{In_yw遬6"ڱ {"xo(7w)4 k#]u-nolś<ŽU>el2Y$YzI[J~>RI%%mHY}\R=)Hw.8y*8?ÖBoP1+~^LәtV'KWĺ5VXwE׭u~;K{W4G"p+Þ nGXrmO}xps8RS6&bJSzw콗zpWF:"6&8+]+n}ȏ[!^ޖ1\sb͎92ObGF[jdHpm .FF)rӑK>2vfW^ ygd=Hn~A<ɗc{@{}mCINCd\x?GSKo(Xmq[oM6O򴸙XȄM-1?ڧ,)'Eqfc%p:ſGj֠6+/tYp# 1S yeυMDE=km1YY]\";@:R˝r[u70f:$ʒPl3<Vdz~l{obLZ7"`y0z-\СlEF7Z*n= T@te ]!\J?hRHW'HW9H]!`i0te ,c Qڹ1NZ '84Pte`zmXZNW+R۠tJ<`'EW.Pʢw(5tut:!`M0te`AD{ʢ2 ҕ)iHޕPpR-x(e]"]AmQ8ty@+ WPʢUw( Dz1th 0{o>ʠZLPr]1bU= y]w(qv]M#`$p-Z|iDEi&)]YCW1}+RHW'HW\`7`-+I0ter;]YD:AHxp` &hYFG:A!Op•4Bhʢd* ҕTʐ+̀CWЕEk%g, ҕRZkN0h`rWӕEd+L]!`Fof2 ]YFRFh O)-]!`!d,BteQroz~F `+np{Z\+N(,t#]z{d&桜<$x2woLFJb3Np2NorzN7ک8i (%KV;>*#h\k[=@wΤ:9ORrpJ-X`I-X„Zh5=`QS 'Z`Fq)+ CW.ЕE\n(Ē"]!F %\CWЕE{t(om"]= ] @I@te  ]Y*ʢ5Q'+I4g!ѕ,\̶D DNW%S+E!CA0te ,Z.>RG:ETJ+ X` 'ht$1wuteFDW؈p+ {8e Z1|At UgqA0X>O st9U= emsttU=eѐ ]!\NX(teK{(AE:Ek;}͏*{zBI)cRT& FR`h*"Ci4mQ*ii5JDWyfnpU0A0%HW'HWv)& $pBWTӕEe+$}ҜR[u8K A0Et(;>Е*!$BFCWA ]!ZEI+tU`Z.qd2MSuah~[R6$zP'x%=%"3RL&HVkzz \ f`TdR8?t^u1GQVP]uMg*-JSQbםZ k=ߏ(H^ "'٧b / ZAE_tw{ϊ$KϲvoZ-d6,V|8C}&77qENwe/Evk%G1"l2ŸXXLFHK,\}[7w-'~{au^9S\ ~?HOd2}J$QBq ^2DʧNIyNs4iu4Ix=/in]9!{JG9.k>!2MUI[(@O;2`" K$s!7S8-^-P3FR8L>ڎ@YLGH\I %ZNGrM/1]6UNھv fhB=jHm(iaTG_A/Lί|$--6y ^!I ǨeS f$8nh!-xDc&Sp}&,M31kzƬ Hne@)aJhl9@@o;U;)-| 6v he|5*+[RʲK=)].I-0폊_$)h)huxyr~6A#1|!X ə@>Ckpe PZiޕ;{俽Nu`8.LJ!>yWxtUZ˲k,Z,C.;üXhh ;5mx, |LZe,"%͉(eAh p9w-UN6Nv0)ߕUU:J0ɧ/0t7)]%/dJDejG!qlI=G%Q2Lɫaz5>%UWƘA9y \y[VRNlY5OͿ3;O5w柆)յEwU 7^hqfo.&Wf~2`Kp`Q #THhe# W$O&cr'~h4ɨXRf˫hd LrWƟ,6㯉GZ_b(nHJeIZ(HҬșLuLuJF t0\kY 
"IAa@FL}*X?WE+637*ɗܰj=]͔%(3 缟@jdb|0bL:3gS|V,Q7;@"rS+kő-os3wK^w7a/ſo1?jȶGI&d4?/çrSʿ.Ƚ@?}[fdx4&$ Vw^ByeL߼I)~F,~jD kA %BAOi_0O?^-r^mbWa9m|tJ?Ό+.g/y9!&A@nV(IkF=޿9c1l̮I2*\RrG[Jj ԢGNNLk>`k1mw6AM8J;Uh!HӘ &#dIbv.Ѷ>NnpC__]/K}z;B>o2 :@hvY؇/~{--gqYm%Y1 MW-k]ěks*لvQiE?*;*@bZޯO;vNyˍE8+O,̆A1.[u7 v]*.pnU4/~:zk:;jp{05ؖc\f~-a//AIX0`_bEEX([yJ.b$LY#zdrR="3`I*ֵiXūgJ'ci;~ȦaRrJb='uƺ0='%̗lhY9l!9cK[t־T3ӽT`|KQ̹{i5_!L*c0fk7ojao5ј'f} ^AY H=,@s)7IaTup86PnvLlX<8àBL@Wa1U%UCnZ`BPCU Gt\, `FUfFثҿ^N^g\,R٣#Y7,(r nHi?6܄ѡlT\&ԞǴ.f*"Ĵ.v*E<|;I262sRq(6AkL?{Ƒ OmU!vR=Im}9٣DieoP7[=3'v6I F-Um<,:=E/Y,-·w^lULVܨ¿Y!8K+BNv Fr3~!sm=֚KjN"O]ޓfZy1chW3&F``Gx鍃~^NB_wR # Kg~i%ad|Ib4.Fyfpfu >⋳VOd@DbL~Sa5ɼ9t{yZemN3Ԅk?m ˁ"-Fi>7e5|o 򔬄Zx+d =(ezBd ]Wm]03G]Xu q>jjYtcz1fHgb<.@ OT=< ^RfD^HHھ\j4Yd JU=mN嫜%kKf6S"i$p"U*pι &g"O#,\PxS,@ձ-`qkgф㢭- bL <+qWo; Άx1clt>E%d$$d?r'kEkkDu*=W1,tƚhbZv&dcL0S@ N$edK gCŬ;BQ꠭3٬fZVC~/f ]QNj&OXy qtKMNq"A/q~h'#8؎F>n@q{ǘ6])v&F+ ˆ['Tb./ݔ9})3ڏ`'d{!w%q*ޜJ/44FFm<ݟ@y0aPVѻӴJw´ܔIXa:HmUh!69yd9-:,٭-J f뎜%|> 5#zx:!wQT$( J,/Qʩ*͕&_cŘ΂1#qW`_Zr0f6ώ.:u!Rڭ4ls>.Q$eqUf<%Y\s;c2%'0K'(J.ue6π;S ]ŸN`?5P͢$+x$ʢFw^I h4grv<jN0"gZ}zEóxLfO/ oY:\,4Q<!=%ض?RgYkχ?)l_1LqRNPD$KDR Dgp>|̰ ±&B}xI I&_$ HP*aN}ϒ[k~;^k:dHw$†1,'ǑЖrI3c?~}ͬvOn=MOŨxF% ,mAExٻS1Թ:2$ ;Vn02dDF,.HÙ G7=b Yο .ޝ >-=-o*+mZNJVY \-p,~^+ ˼pȃ.(U64,fYzCGcmGf%E٪4a!)-hy.%;:t$, ;_(0xSvF$焸J&sߴ#J ¼"Ғ%ٜ- `Q2\$^z4>j٣z \eJ3tx1m>aFnl6H#ۛtaU|eSQ^ Js{QPQ0m`X|߽BkfEITsKwhnO ><7^-zO̕9R3iwh]?L ! O@b~uΩ}p7C!HQUuRU}2Œl&t~1O27ht=<S*%XUL%hNTfF.Tdk6GOq&9Dɥ >4} D0Z;2!͖Nz9#OD=4Uq@V) fd𿲚N +lQ%Rpn\3V$u[۷4ocj*4a)|(acX2I7! {|̿le]Cf:ɶ+:!W㚌>UZMF Dh4M@e֔-j( _g7UPyH 6-LVR6ڮu^|JȭǨ?5-.R\A kו`ghȐ,vD(w-1)E e7$cƂkLGIqAyI+sa rACk|՝^ /=n`fZȧ7n+đzkp$Wih!d,jlCqc]K+^ϭ$g25PmgLh6EeD=#!@zXta#De8ܐ>=!'5\_yU+A|z,߷;r~z &&crky7!0pAI-J7o XePQFg `FJ\I~J"-EꇛfF4!]K:\? 
b$4D8M/sdx#RxjL.mr}J #^Uq"٢G@vU+ NT X i25lmj:u6PdHݰ-8kLJĂTm+S.V`|_ \l>ʿ\KSvj=ܻA P)Gq8`!2uRWlB JZyw@dd#  E=DixjۍWEIR._vPR!Ī9]|)2cY%1E1q&4K=<"+On5+TX4V[lD,A"s0DvqK8#$Lag\+bC7x^*NJN6hv/.vMk$\ /B$H,,Mewl]I.wPgeul_,)WSoω+Dv`勩kҡa)FN󿅽Fj>c, ٣P$F""n@U$syNʡ×X3{=7NO)@;fFzIZ {۪nxX/ݷSw!AgHvӁEl83f(e3S[R֤V _<[5v+F{O+ְIWYm?HFVK):,o+]nGNWLqœz\.#3UviϹvvx.T9]D"L8itc.fwu˨dÛm]ifط k5wTFauR"0\ t|7WYr[ŵW T`sL\ lf<^דD6_T\~VGw%+_ș;tӎ|k + .=EVţX:'Ѵyy'wT^YHj2jF)t0QLݫ8 df|:@Fzb08EX6iδ]j+ w mkY"1C5}M@2ݔ6U8VINiVaՋ=/"y`W[t5Q-,B7*JW6s&Jj U^:u_h(e9rf &q X5Xqҫwg&M+:#'*C1Ŵwu!'LV(«0l1TZm\|@\\e gp^ݎGChLVo.EDI ғya0ӛ 9"apҀ|6FugZ{[_ iwlHmP) X,DQI۱8'Qb˦LRvdEϱۏωxטcZ cRv7~ i.4SX܋ C֝ 3Dǥ~[V"a('b <[)El1;U(]( |+nw(>Eq4t5ԔN"<_/ؔSQ P)7vt#nޞqMFɝ9@w9 +ޫn{o V4L]cɡ Dm9LŠnǢq#˶mwr*JX+יcL"ؐTe-b%@مy<@wj q\0(d-b9@X řZBe?NX vԴ؍.0I$7 rB?vdϝ.88]v;T(ݒD $hjZ^`)2TVA3SL(S85 4XVfFW. ;9@Ϻ# e:Ts{AedRD,aEzu;)$< Ye},_S;lMK<ƨڷ/ HfPrޢ* '']P(U^uVI6ٚ5\n_gCZaY2dߧD%R )c[kk=J Ie} LE9^""_uG)qު?֖JbyqGcrp Cnr{>}`QTWA]:ُetNޢ&F+n>A gSa ] H%v2-'n=qC 4$  NaC~ 吾]$! ؚ҅7կ՜57F@s Te*qޅ\ajg<Ԕt#4|_|/ !kyj/ZQ>}Y:$O=cM[c~U>XB͐05YNꏂ{T*,B6,)BeݸR(QO mn$7PXt|qF1\|Wsʝ W4+˱頻1+ |NJ˾Q=PZ],]j4C zz?Ċu7'3>۫-OμIl,DF$_h}19Pp0buNc-wZ!/'F=wlDuߧjէOu$3s^5i]P⒜.|We+ۗ7j E$˄fI5>g8s!NOPޢ+r嗲X~wi5 Š PVsөH8'j3jXM-`2,T! '^.:X[E=HhtXd5z;6:Q_JJN_U*tc쪉SM"Nk/̈V%2cÌE c`oX[Jгqe\p]R4fQ8v:&8ǽ &μwjg WYǸ#swm)ٰF/`GbZV0hm',ggw:rJ&mXc- |͡WRmAoj ``"+|Eԧ(=LK s8t3%nܞpukCCyJvMdÅ1Yҕ6N#yH.*z106;(A(aT͋+.#q~ 8dMzj- >atZw+˩JFb ꄣS^GF|-NNTO&W监֦|320`g#9y.sn}+:^HpWhdI,6OaXNu卲ZiL^Irn9< ݿuY6GzNMv_5Fy&N=oۭKALY g$ '] 9_ Fn[)3ȊRCA넧g(+D)1b0ԉj:RYLNrXhojV*~`sQ[M/!1(T"бsP xk|>B*beu{ 4mM:tk9`.}1JgٽW8RڽWQ׹ޓ< $@X&Дɴ9b$ c[c ]^sxNNU]ʼn,9ЯΠևG)8 -.AC`z0KL K! 
H"b\ )·U]v&bVT^.{S,X4*X&W3]Pc6[e"+F *5Fa|Birb6Iba*I[`aI4 g,?~r7bl>{ڷQ( ȳe1]Y-eAM^szP֑%@/@ Ee2|PAmXZRՎCjOg/<~`Ij֦t&!G/ , ,Yeyid"}/U1EV~ۭ fAբݫY`ةats U;o>_i^7AR)JEI|y<ÄgVDUAVOع/guxRƵhnчWqL?LEb9ijͭ&I \@0V@Eq}~?'L; Р`Cy&}j5 O%\IgO8@*hsq|N2_֧dVՖRJs bL4<4]$lΤ2 Lrn3PHGc/ M y'߭I^(#$JfaV3k>t47;SXz{\Oq@&9붛0fux}tplՒ^<%)N)2/ғjq& P>ܴONڄw Ĵfetn0p_v3Dǥ$ܮg{흲2=!Q d-o\ޕE0NZS;Ag: k;`!gb SAxGnEj錱la4WlRsre;I5-2  !}I}jJ[eJ8Rc O eHf4nFj,5,Xgӷ*«¸&I`/)&؛#j}}Ήp54\ډFt<T]/M<9'ڵ'+uktFK[kS)KդL' rno5 *g,EDYȂV12ȨM61,:<2apxb=HBPA*e +7wL 5/)ӫEYemhKC ep>*[l[eiL`ov!~MPLc~Qp BELNuLu`8شi׈( 1 'P)$ZV{S*_`&:"GY.Z8 k]׹6IkdIߣ<ɦR6]{LY< DTZZ]^eD ~I'יYs%a\m'($ZmrN_9U-$g(Ց!|I:5.A0ʺ*s] ܰFg]S.ˈiWȸ8=I+.IqU5$OI?lX+ET/8W}+ K;ggwvmXcp~DY*rC:u& 5^8Xzʐ'I)@<Pz@z0@ M#s<*Ms\A|@iA$Equ,x99  mPk9KY\vHCx<) 2⠦̴FA$LwقSj2]njaI0׫r,[oޑlqot4mV޼~g  ˛8p [}_-XdZ%& %LJ\QH\ynZo/QBo۟0%MFdb *M" \%R*FQ"3rj(|^-}a;h&Dǻ06RNYdo:f~rqGya<ޢ*e_~i$^I$vJ~! ,V\ɺz'1J&)P:8Ь<2$1B6CnTAKͷv c BJ =JܰUCI.OONHOhv<~ 3Bj꽅(RxI}b +%N|֎/P&CyJv4iD2Q,+Ĭt m yi(Ը1мbQr1S]}+_tkIBfZKf+Ojmj|^ߋFS˿2ѧ$V Yj^l *(:E.]r$xγ"X2k$AfD.R"P„JCJ)qL}ru hhlS]:.s3,B} )oj1$&C^H1$]XD6KH8rv 1xCXַL??"3EVXϘ 8h&u*mU'$*Q$A8R)J(pMxf 4D\Qb H qW*nr+~c yU5G%QHf3\%CR@6<8X!>΄ah=Uw !)YS0@B^:  گ55L)cc9fss2yOJ/DŤμ6n7b@S6^u@\Z H?vhV 'c+x9U@I{xW2&؏^}Xގ[LFϞ^,^~|iUz5WWi>~{VRNl760n x*%5;.ooFs! 8 WD BBkFݥ ̔dgp̖ Y[pݯҌЩ_a0ZKyo@)PMzB݌,I%S /WݠHHpڢ(Ep`05"ApTCc=gS, OxY80ib 8Xe!A3k-Xsp z m<_`'g VՖZ{6Ը<#?La ìG<8 bRpjhl8i2)F[Bh#tVO&`[d04$p~n`u#! twA B%d _"z4amWeO4QiTnnXcH#\;HI1bOƼeD$FWqU)5K%ԛӇCkH "E4cmw1ڬb`qx_/4TYPm4{]I4xmO_(XAxbGT6eAV $$^Q~2Vl cnLzk̗j}r|;7F?փK?n%t:eqS@Oas9?8#gC{'#lL:dL8?9\nx~y #aпO? 00]pawc/يc V,qec6?廮=[bGL#όxqjR`샌m{;) ̂9K(| <}D7?@qhsv~:Zx+Mc"$Q$OIp1K*Y#2:<˧Rg5zM ~Z{T_xY,ftFXO/{И+:1Sʫ-`)-` t_kjf8={K݃} %}3sws~zwӁ`-!+0x $1<'Ir13)fM 5bM E!{L Clte?ᶻٟ> ͇lCm $yUᤳmjhB4$A@&Z"1>0faQG+IS)=&F^B1? 
k\ےh"ⓒZ%D Ι<8R/8 ,[jD- $hoap{XYӫ"vW=` ;y*\Đ08Ac{hp;Ihɂ&$0Vi8Nh!F H2()!f{k[+̼B(Rp DVCIrqh59Pp= ]Vg1&"h-MfQ]d.4UWo&;Vկõ fli#dmٕT3D6"C̗+ʻeCPUWS;7pU>RɖO=Z:#ކ:%*䢁8WQ_uk M`⇻L)%Q˂56}=& |\mHOYT?=D5/qO~xtT8t352:PkipRiS*iJNK +M%꺋ԝ/?ew;ܕ)qýKSMYG仂ii,Cb0%*bP &xCQQKvf;(i@#$'Cs:O/৉>w/z Re Y !O2 w=f ]еPJÁ9i DOz!`07j5Xd6 -o P#ς{޻աO+K/ z#VJ'y$†dvqg52roq)w>u,?珛:@i¨tW۹"tl}*؋.&O/qØu[E~Bj2r1c-ޘ9sPaNr ,=gdczS@үzD{O"|j(uW7?~uf1fb^Ż^60:ˣ\taڜ{v~Ǜ)g#9{Iq.%S/rv-}: wz1] :y=ؐvDWr"n^ʴ+NP(ep_ :~{LѦ'mx fùٲxovaQux;uR;2`d)/-+: !U0qd?cZd9s] "trb?L('Y5b@X_Yނ,,`\^;06eyT$'}&MԞii*H/̂;K]V  ^Q ӽp!t޽((e8# #7NGㄒKfD鉒2H/s] :+C#01]bwn![I5G.J CBHBL;FX6 K`W9.QR :8UAF4Wm%8FUGȻ!4p Q\BEfaqlSHan+km!(2s[9}'e: YŒrֲt(Ł\xЋ˗V@]jJlRS|z3LsцWiԓ{hK~h㱓]^lDv 밣`z)d",s RS,z me\beB[{½l!9MT~{OyYN!'=GQ- jh_|V[`[M9N&_abxo4.={.cs'ӡ>}^E~qOcd9;,;yba'-EZVG娱:Q g#F8c;qc]e%$k OtJh6QNa) ffh $Vm,*Y>9J;{E-fBhE;Ll׻ّI &=$P]gݳZqO%Pa(qarp, Nx]q9Ņ&oƾ) -T%i":"ASŎXe'r3HIrG  M&_v!" nFIxAZtY Hleq6iK:-;^NOF]1H5Faۯ-5KK q:ڢ ,Lbם^ .U ƭsg FyQEfc F]]7 `Jkhx:hCW=5!uZQuĵ]յN9MrX̛a(*>i.Dswi 5+h*Ǭ8)Fq XYzEAjhdkOAfs 䈊4Z"QktuX{֭ OddM2KYF x{F৹ IcNc;}XC,rdI`-t)X)U Cw}^s"H￰+z2dSV[;\:RLm0R &[/tKWȉ=㟛so J[x`G.DF;7Z`y0R4 aN?A,7l6| 7{ 3 h`l1mTu<*" X~ ,`a䤀W[n'ϪSjΝN@p`GBU0o]ugL0Z]PHhS{+Xhڒ]:]01Ӧ};aVm:*B04 h#1 S6qeZG8C '\>}; 0e-^'PՂjn&3tڄOʹt4i"Yt{? 
f}sI٤R!TqnRDS|)~Zcu5y;\Z@zCӢ[B L(iN+ <p(^X5d%&+a%jʭfȩ%|~*qAqD;0{'Er _ĴRjH zp!&UL9Zu!iSm%T`JMghE'X6FL|`m>+ /j>!/?~Q_.h@[y~Z^j5uu.SMyz4W54ę ࢠMaЎaoRg f $o]pV A㺞7ch1=HV%-{="VӈX7S4zXHْꭐi,pc'7ѦVbR {r i,أ3)Va% Rz$Mᘤ7v{)l[{D>DSwSsBbՖg0+iLxɘǚ0/-¼dl1]ͨFS{yCISZj)xĚrU,eY!ucg mޔd #٤=J <=,G:xHҔqk`jbNl&hury H,4 DqOCCJ/֢۫ۼ+`q\m0v8gZ+S wKϿ¼]+SXS_Je32'H LS >*:pru^E.nH=!\|^9,;J,%G($,6{:`0P7B[z+3y*\c.5RR|v7t" NC|,*.VT9 ) ;.3xGS&;~LWT;Z(F辜cV;0%\֤ lzK ^Cm$E46̵B=[,?D-ͣ1\&eG)ӄ{M$7)n(,ʤEE ht:HheIQ+a/іq`d &9淂E s2*NY@AK>E(x(Ǵ+< #,:l߭1o*[t#TưbUdYʏڱEBEJ6E8;xMK~*k E"->RIZ5GLfв1ئӉІص]ã@wI1l*e?76smiʒ/u ]<&ũF m._L _pPu*a(Iˇ5I;Q52ӣ+|Ô vپ3 S6:N֢/wxY06yv1,=|.\/b+-Hʺq6:mZךLk`߬*ͅ$}G.F VJh}JbGBՈ*4Zc'vMԭ߾(+<o<50Njb\@3Ksl"ҫ/<*!mV'Y&i~b~xwGX`~!^AHF_{:gW-s 2s SyY$Z4bg%\*,uW M6.|9y> '{4=+qOWG!;s~7Hm7߷M)4Yp?,F#8V_i0Z:%׉&u:Ú0x\]:r~i`UݏPU )U%(U3\p1b"Y_fYEW鑬c=VfY)YHSk1[)ήK4~{e/1~7zz2ˀ7B%K<3GgN2q4olsJ~yFWNmi/1l7 Nku>=E3cEYY..߸~['QЉtH:\?̨φKʶ(* ߑIL [U)ț'PE&Pu%ۀPG{)ڀgZ{.'kGD[?+i_˯ b{ K=p"HWVeVᐡ'떍 D* t\dZ,[ U0FY? e$[;#XU$},T6tW`WSqH<0}BN h@+eɬ`OPb7TY =h6N&Hs\. :ǙdW-l'i,ŗX+`G,ҫZh2 B6SOKU|`u01&FLnt^Qt0Yr|ZN?S$uG6g8H&uqZgHIK,?~[aB{2wbU$Vaǁc/+,Ah)&_a sU{t ?AԽ|v"del|!JAcC@a>X L%)Z# 1)3':ͣ&*xE1QG!2]C6MUq`/?~!ls-rԗ jv+?O\-HF؝r2 מMu_JMܾO\Tŷ?hN; z A1dP$IQݼW#BZu9{?;b&9lUr tܝM&sFP ٠r,^hg& [ @g88e(/u)pwns^"gha,*0PJ%L !31`YR}PD Nk$٪0ҙ 6u\Z."TX\P)"hĺ0AxRF  ݏoσF*2[q:,R#>r.$lHOmͪ^3@\ 3x2a<̂ߌJݏTXf"()F8tH2"^'̰Litc4} +C5Q{Q(9C 6r̘0MFk󉀔L`p5]V+E -Fx=.cҕft>Z~A1`b5w_ۧwln8v|4Y^´/qKeޑa-f $wOW9{n:X}z W6ۧaM2r< %kp w8Qe8 &/hl`ZT\=ʫL GYxWI֩/M,1H  E:38O46p*nmCwS3DϢeMGb^"X(G$9ֈ;3%5A8X&UhlK7M+& we3`dvx6t$4Hc,p~krh o 5 KT3Jyݠ i(/ޑz3CGZSf3pM@N7z&۷܇jSդ\eRa}mnA[JLӄTfEs">g`1go>. J`I>p4i36zla2Hk$>}~2OGc:4-~-hjYpe)n@u]LXQ]aTpt.D. 
9>`,Ӈz*#?-8o9fPٙl*j[SBi',˳%K9f2mg>y=h$RX4niq eakWhF'"$IYxrn C;E$w~n5I3nBa\O}kݧ:~wvnXAVܸ벼,Oe;I]mnPYR' ty&jHu=y0{ 1%x;/ xV^!.6#J,]b3SB<ڣahĉ0rJht;ܢfU:ŻR]D%iaSr,^mcLU*j>.glaDzCB"Ĕ8rF5$b5a?Z9P\JkBkc㒆R:^oueh)`hG"^܀L8ܾF ,veBH⚆x;/(` nc a򲛠l4KY>[ڊђX@TmX{>`ˠb@PXj5x.ꭕJ:,n)7>2+ml6&ֳj Ԋ/݃'hsC8 ڠC Ft$j<؍HLFoB_&U&9,Jّ46T`a-'6W_i(Qb $wُb=qoEQ4z~F rsdNN?=М{*.2O/$AN Epz՞)G~EYm"#PT\WuտT\05E=aI= 5zBז񶀍B.>q0T\]V,U$x ս&/(ϿHh%bD4g@R1U.ަ,T*@<|D?|f6?0QV)?"D erޓ6rcW2G|`@& l6_z"Ynݶ[ݝ},R Nbw_>}Z!=HK󒤮R`IRY|PH- $hKEd eH]RBuDɵ&˄oe})VҎ~nOJF?oݜ"fKH(Qצ,"ɀD*EQ] Zt6ABz5p,с1j 6ƢVb#ܓ-J{\3x}@WnP JATeZ̨EB \›(pc,-ա2,:=P CA{EQ@3!ޔw5PDF=jH0U}  dvKp+FvLhAlɄJs w S)}ƧBbp+{jc(u Rr 4̚F?mJ#0<K‰k;pZᔔSYJ[P5=(㭄R%B Y)t>PjésBpr:e*X*%"&='DSy* =z1ΔQժLjt\Tb􋂾֎ĘVjKK#uk<Ph㍽z} JTlȄj K-j3^ڲR[kz-I$n1 $C bk%R*^7G֞4(ZTPTr݀'ܪږUEրkL⦑s6{`V*@0%Z V!tc|$o漅{ H2[zXs4Mltc.E`3iws*rI)J()E@k,ԭ;Ӥ;Q)3_e#t5bsT#ܪvapk?CfF¬ CU`fGtP/Xʙ6)x[ZGWE]TQKf0BTd"ڥ/|1}qf/jU6疁D7V 3WIJKtJ@6H왉LzJLSjy`"{OtRy.SSoIwp& ;yO@s2<횮crS&] " DoT˾t?M;{$".<.TFM <2gO}y#4Iiwg'?q~wX.EΜ 5^L^ .SċY=R~|F*gi뒪џEV ۗATX|D|-&嗻X]Gi02nxq}~[\^%N f2wסJ;Q'I-" tK\Mu< H~kTDt0+eCI?$Y*+4\*.t, =#8E:!ЂS+ C-}p!dnV+~|oA4v4͘֎j2)]pZ735@,ٙBnlT˾J-1 d-]4ZqM]`J +x;›E\Ҍj%M҂2:!ػ C(JS|&6&Hobd6⟊I+@ @ >(f Siݺ^P"FB 6ˬ-T+ѫ-k(ރ#iPuDO#`*J-B([^W~WE'^ڧW~'9bM LKY5NsRYsh*nL 3xzeKQ?2S`NCL*i.d=a%i5euzʳVqMR-dTkX,[{(~PZ^tXWT/%33tU.u7_n=WomPVo~#diUJum x0\ Q% +B5 ȅr}^YHKіڄZX;м}]1rB(D /†{5֖s\BFw> ^Caqơe6Wi壆MsY9T;UAܷ^2TD5ɻu-O^Bj\Yzbl7^1fbYFX F ܺurT@ȵgL:Hpy10L r9_|}LFmW.e/atS@?WWȹe&? E7w)Y M$vRn>;"y Am'Ȟf/=^7E+pa[pdvsL2ŪV;$&þ 3yX,/DуJ ifvg+"u:- 7V@*} 耳mEm%CkmR'Um߿'"<{ kdK䳻=&RO"HԔBY¼s4&SvkD,ʇcEe;PֈBO_?]!1[qlZїiy/LKadʓEx ef:FN*'QnL0ř-8~F1{w&K3Ǣ@hOmIw{hb'V{'5.&:+@Ձ܆Vrqhw{eQ}7{  ޘڧdY-0޵t垼0Ly1z70TJi!h*? Iʷyɇ,{|0Rt2̵نw9_Jƴ.meEmt!K(Ku!(YhڻԑQBژZ:p ǵzx9V)k4*gJPT,2RBl\5J6Lh)AIo:DfќSח~=ǟ~L˨ NZ%Cڡu]PXORXIp Uɡ ei6rb,.pThpdV+NWٯ;]I`V"8(N ?",LP3Cy;5Cy:!;tB 4l(.[?MQbeۤOIsv7PJmf^6mlI=2@} ʕ  :P'! f(8 ?×&_r{DFXSDPc a [Qׅ`E|U}U)y,rΏdb|Z6*NED twW rm+~rmmW]_(| UHAZD)1FoՃTMez-@[Xм\n_їt9Ȑ5G'|f!y/`-QDP✏XJy>ѽ٠ӊOsdOӥCڛir,ڿ8/ Wc"? 
_W ~]A,#}F$ o36<~DLWXP}L  x`>rDLQ|@wl_}L Y\?lH&:#*K,22ի2_ӏ7ŭ[|$hY/f_ǎ]aQޖ-8(kب^#=U2jּCj-&ߍ1Ъζ1AuZ W9Au}S}L8)>D-M=&@wr` cpvnPڮG빕BHGҘ)?j *Kx/}cDcEiוcFcd̕&kyU`/cMoޘY+;6[/+ dg*Ӟɯ?~/?xw˫dɯ.ooߞNЯnIP  6 K|NH[?߻eqspI7Jx!JneKAk!iHuYF!zK#hDM*ޟz4e+&ݍ1{-~h@>^].ڏ4Nm`wVvӂ F%Kj͎y!\*;htH(M2AG@6ݴB'sci}ݙ mS`qA"r* q["%$\)UK8A9vډ`9mGˇb` ='3R(( DY#exu@w)fݡW2 %gb㑰Qe1$88L~s{3uH2؇jyuclƱcl;!n-kƚ׊莄AJ$,P:VXe125^H)+,G‥qrf8ӓ J9{U~ v$:6d̦w'DRhȓ#57~Ē 7d2M}y*^(0uRz='b`!UXOy͘Ao-?Ss'Af[;ZЕQ限:=O"?wg% 5/9. yRQ #e!xE%NR-|}ngwCs|CWXSdj< GTS=t:/+GhgMۊLv? w2Yw2%50{f> qMh#`M?7|sui[lM_ͭkm+H{ugy 8+pVn:)Pc\m P)Պwz>G<3A(AOTH@,O,U4`x)lh7bsή¯F|>Ÿt"?;ņG ۴>z_8.(vOo\q1EaVjf\`4)ʨ |ʲ /cG0)Őjm;ם~TPyȴĐȬ|㨘C^SnfOFaӺ~Q1&km(!&L_[E_cjf<nLZqff EkšNyЙu^2UmlesnRCT6 J!7BŦgwU Kʲ`wmq:LV1$ǭ=?cAWg&~%;v_IIݜS3'vUD UΨZcmkL+趠>jONVV0K\| H7+ ' N~vOOKtŠwrOB|WlI {_ #2+J{h#5ġ)l YAFǭ~oӕ;G FݐX [խ[(j c4ʲ_e## e;X|JQ͞1V jZ4~ybh=A?t"6々ړq 2e+PF ICE#e:BeX)ukm YhL[%,Qʻ%#F1l_u#K~N͇Z7@B&a0ZR@YMBSd]5؊5ݝk@`Rh_P.>:A-)}PK!Ѽ5kb1XD탒y9iy :orkԌwSRXYMg͎1iN']`.vc0o\.7+VZ )~C_od!EW,PCZ=c]7!v7 (@EvY{DeEvP:[q[\uENһIF߁5.vSŸ.j%o'0)&Iҥw=$ICj@4j^|AscAvX6 CK =ZfjQJ5iґP 4Pj@qlװrϠ;HN]'6W =cTzm/?{f)y3R{g6f ?~P {!g)Y㠎e׎9M#e3s()6(Q˿A& #e䮟9=@Ws~Qpv&qh" PSj7.R젏h|>!U>BZ|2 0ӱ(9[*2 *@mS\TWJ> F;ca|9 +'g*'`(a!"'g/vH:^XJgߏS/Pm?z麿x|s]~EB|vy>$d2C Uq󊦮ժ)ҕ)h[-e,n?m|ײ[ը^@E%lgq\hoI0 r+8NT'PR*{/ooȦ՝2Oـ%pv kS]n& kMa?9f=$Q*ۖyS=\?NRug"\mv?wJL>]lÒ>ڀ9A(Z[ifMa[T+tۊVV- b,ic΃gEXgbq/vSA6{X0𭍧A<|=E>Wצ|.a/_]򉓯ޤY/_^p_%ЫRMqswiS@pBZG6>PXt9ִ+±,JpJ"=ka$z|"X>$ö5Rh~^C,8ٜ,OFJ_!${X-@< .cAVrǕ2gٷwfXaj9Ͱa7 Cp2i"h:[&MtN 526ddmrT! ^<],u;~Ӹ컄^& 3ghx)K5N6#gu9Ad9 8N%G5mh:[vG wjXYċFΫH `*G`{b)dUmor< wLX﫿d*N\fT΅. 
P(Bt,`+eq"h8סq,{X"VDWى|Ծo-eq:2⪏=J7J쬏!u.ӯaN}еqY^Öb:Xv+ұ^_wY^: er;W;i#c+ :-蹽`ygb׍Vj킂2ܨB\5uaTt)ˢ(G @ԋ>\hHN::ΟΉ 8 hqf&~2؝9!y7F.R ոFŔRͭ$=|lK=Q:IxԛMkQeu)Tu+jj@Ai$KGuΨ``K^k?x =c<< X &Z Q)e"jT^c"U⫩߲$B9Ca"zatdXzd+]j;o!C/P27>7K᳟a47}ɡ@Xgf ||,^ 3N*:"7ҪER|MbmM=PloY5)RX  C\VVLSp])gFP|nD2dm{sA!dߜD?g}ry4ousz؝o^?;o:V) a\Ve%~_ګԿ1rv=a锌 =6/G'SqSoF6>a܄j%rp_MLD^ͧ YDA8sBЍk ڂE-TwPhTR3kH0Vg!C:Lex}bGȹC]!sr Ꝋ .b>&Kj_ _2jܕвZ%ҝJ㌍'˹LiqUV&.BhTsp*W6s83%/2qѨ,_᥂HE| uc6ۓCs0%"iz7|jTE5\F03@iRe72E3\빳g>x[DdmBWfxXfMHx"/eOȮTn2Z8ڥ}SA4'BV9~џh_0+-hmBPC>X0OX.qChdlbD>k4fnaVif(j!O!M$ X x։~(C W2oc w1h3OJ=ko:bBM'5_O3W9}]&<,rHh w"{;Kc9/A4oED9 eL PŷNd:U3=m%3x(+ ){YTōtjb˗޳qtW,tV,a F m/mPH}j#k؍ېP~X 5kU>fwΞ39gv㕘*bJM6 Ra2(ȁ "TGS}*'D%&]HMv!U˜尫qIrO wZf/Q7BEU9yUh7o\ua9/*,*~BT |8Ά=*UED()^JЇ/ϛ<2V^i7*creo :EGXsXzLqa\fk F^QK˧PYCQ:*2D8 Il~bye2U:#Ew}*68劒@\$x/>8}Xc`mxv7޽VI)[q]Fb(ųdBs)ծd,2ʥq3@*i3視Wr390$[Lq4B+IF܄gi$6ګ#@Tn`^!!jo%Hy;6ӰMX\E&f2m/_Xz "T/ygJi}b>Q'@n1(bSN~8 \tG>jpo Fm: xaz.@#?8vOze v[>V}߉3 ̅޸?W*"녓Lat'!.B|ֱ`RM}c?`N3οt`785.i.UO~rxϯ~i?M$4{1= ,u9؜Wjhtts't;=燡*?p"y{`5aOF;0ؠoC4/r .;K.|x~Լ8z>tu3yz:yEt-/S?:5KjP:=/+b^9~)6(Zx}xbps~\H({sGpQL ѿ;5]Td;m^__@ې_MM:/\8{ǝ^o D`<{{ AqgW>F!}@ ËB4M]8a&|v܃ׯ?v|םɗ7ͨO!$8ASN GרJ|=yxw.IAa4:܊r<2ltg4ܺT?hICeQ1]fU"J"2NLr60ػ1 aÔϠA Ei3ɼpf1 lbL8]`&;"e*09ZjRfN6s"JObiæBKu7~ض(p<NKp f zcٶ/AS%\*|"9H.n3>Fj'-.d!e8x }|]@,η^jnqmALe;p14LpVTr,Y|pa0PNS(6qL}Z&SԲ"EXGOSH귾>q )LH{nn~ 8Hs}Bh|ɮP& ص'=O?@͆g˼3!RןZ Q1W8pĩEV A=nOSh&,Fe9"nGan7b N,Kyq;:e`თ`\*V8b[N{ٞcѝOQ58I nYHT}w] `l>Uf\ߚGxuaU=פ="L[0m?6Rk|J8,]G@^R'ckVEϽUlkXyesE F@Ûq+h2a N$VkNU7e;QW\[ۍ_:>n6)U^3"ML*]_9.:~: 8[Yb`q\1 >>1Ҩuro'"ԇJJSht1"AzdL!8K 1uBS1P+'\e{P eǚjfڰ \AuQy%ߺvpa`l`6u`ܥ.XMxyHmMU]8b?4bLCE-B䡷4E{2e(hOi@ 5(BA ) R=$xLP16Sݎj ZHmŨp@ZxB \kペ`Al 8` (2ƥ HH/0 QQ9le<zJe6Bw ՚ ($gL0U uDĤڥoa#s !JgƏšԌJVS>(4jqcqZcC9fdԵ)V 5HPX!"&i+T&h#-w%[}eM]Zn5:ặY^9 \/! fHEԠ|" QP-,a&N8:P΍{)BoZjrq|™C4u;u;8sQހvqo<e;츋E=7+R0pd]jrQ.04kYuy7^v)uTd. nT ƀ\A+Σ:Li Μb)usaNR%ژ%:.Ҳ,ee嬴*iujLcV0CwaʹLH W'VY$/'l 0025:RÚ`r ,_6pL*JDTFy2v*vFX\D05 "<+FeD:j 2bR'/.xFֱʁxJfLBB̘F2.A VnC$awFVDE@Z"R*Pv]W,M dib KYyb k! 
ꓤDQ6>γdg:ϒu*kAB/PZ Q0aK2nLMsNĀ3o x3 QZiUhʌ4V!G(4jŖqɥ&2gTJ@pCy-&rwCN)LRf4RшZX/3[ ב,/{?%X4UJk H &XGp[ Ѫ葦F0p=:cÁG 8!:D7enS7 zgMR;KAaZT%qaRXBJu6cRIK: 9ĕ \ 1i UtPWtOo- ||S$E1K)YJQRbVNQJuj`RVyYKUhV=g> :kYa^^ޟ/4} B$VS\oo|$()zµ(d.LЕhshI2)X/;|QSMF%#6fǾTn 3]2w \Ulv|kNʠ |lg|یο݉Tn;ڝ; ۡo6 PA_(#!RiRC#n>Z LŶBkJ]VLs(ۘ(T(`hbPg&HFœeew T)^v,M5ujm_kդR>w^M_4nW@~Fv0qx7xK(aehz>(n처Z^Ө-0*E] |#YXhE%-S_|QE6qa㛿Ȥ aIlloxoJlcNc',f/e[ tY%@߷-w^1nFp$=Dʻ(wmFYt*$J߻C<2gQ2O㈋fiDjcٵaC4!6Ɩlo~J+cݻ6rcPcw*PـT6򘠲1ɜ2 J-$8D41 GJȒ7fY;kb5ܤ˥4p[}^{<{WRk+HZڽJUɻdTi8DW[3AfSdVoͼX6׍Y(dIƟ0HTոx%|~nDaF}lNI;>_B3b6٣m$˪q4&$HM+14JXK"M0gt9?="}y kq:e7B3#qiN5'EɢhmNc_A:W85pLK|+&!ÂO%28FTfٹY׷?ED M⼳SD~m2Iv5HBRy(5IK d!McJIp{mZqH*KwBIi+ ]=mJJR[kFh =nGL i+ۘc# Z᠓bQù,w:IXz]sZ4RҺ'5:O(;|Q:_9R簜8PD9f/>*H 'aZ$A$*8ł -Ö#O 4Ƙa&IW&0b9K=Rwqn@4wVRB'Q'Q'Q'e^cu`-c+.r b42d^%ʢˀYQi-BKgN2)Q6ˍR9 SZ\){_KX~.4% ljFV*Ry:IY,5L8hL4q,o\BRoB((L!d0 ϓ#Y$%KcZ YɮlauBPWG$Rދy >km-L"Nq؋LP) k` kc:K@v`x sTT!k3`)|ςuvV:V)ٕ4TP+\E!O3ДWqlA8 ک̈́ҝf^dQìgRih̵+#L+(K^roф0)(%rf.gaFKg~;΋n2hk?æ ~|6$tx)_MkXj4(qf`F^hO[g,$jժcH[Dx Np>x6TR.|SXG|X,j JMQ*# PCP ct@+m hшSˁbN0Oluޑ BC PEjM)#fFvKӫ֍ $ Xq-%[J?] Ә,|zf;%m! PCWRWA9ȾE*~4.鵶{4@eݿI>[qVj#/$Jƒs֚a2zZyiZx0tg,>Bܫϓ^kl0v<*>LFQd~ wk1Ij^bڝţ8ɟv~2e@yycٺ) sJýH *%pm@1XJ22~=?YrA;?n #sr3 Nk(YYIo6L<ѥ"V8"e;L!F-T_v> ւ (ҊsPNG.8JgeAi P%0>?#|kS^[@O@babUo@+lFnXuK7(<tCtl 2+%ld;NsX(Ű"مdP( ^at[gT*f dhʧ= 3`c[qeg/RKT\Mlb^| ڍkiшQT-=\]Ϩ^DRISDRe 9A1k4 ',I b?""SADADADAaKU]ѱ8?_XM}hx9g!"7LAƣlP&|MVTEhfJKUJjybRboAJFx} fRGRg M 8` q %E քLU UZDĸDWټmaۙ2[Á́[;η[Oq8?p 8|E[Yo+E>̕?f uW-"kϴ9~f 1~<&%el,( <eX.3"%(a -4'5 sܺ?K|#o%hg-᝵D3Bq ovMo]vAZ~çd׳`+o_nYvdlYHR͢L#|ynY"LZ_`<֗k 3oA(._}}rϱ5%B-4/Я= eo:d̘; z?= sd)9Q셤J:u;h!9HQG8x#LZto}=fWBzg6LaP@Fon2UQ>_DEGWMDSP8 a;y'8(:dCxEp 7iq*!d/dpamꜛѪ,Z"gx٘}pق& WۄmV#ZH&Zl5b7FD>4bXM&Vl+bۖIJɏ%w3*CH2!A(0ֻͮDSuL$iew<ٳ^O_eԘuڝ{mvMgϚY7KQ닫o~W/.߿x雷\];n@ˎN^)xh~NE_^\,H\ w&Cgp:å (RF PA -XGBs.CXh;r Pi7>V2"&l$ g(s8 ~L;Uɲ:a{!jaق>XU;5նù g׏`f=%? T]+dܐ['h4>wػӁ>ͺ=m wt/=٣mO?ߓm2&^?~NkE-g&e[)4Vۢx~ ௿B3,sϲms`xcd? 
>e^&g4I BITTB9<5luMz)@R58cd}{fz۞"N,B`{dg9[VmYIyF 70>]}%hm%zn,1ziIpdA! QgkɃ6߼y o 0>ۚ.'ؙ\$dEg}~A%=7J{<P*L߼>(\^7NJ|25p.$&񖨌{7:=|= H;/U(:M#!aD SGGhPD}jA!}#:ګ \N(_vڷگn}Ak_#\*m< %';(ĐN@ }=&gIɏ+ X tL=:=_7 P Q׃xȷ\cD(!^[d[~cv|.9ȝ!ܑ<õ w̷WaRu&5:\ҎhtBy ᱅*dX\Ù;߮^yջjukHcX^.*~~s)ozNrsR쐻3Zr 7>~뎷?ſWuuF>FİI{Pv/oq. yN߂VMqel*T~T S[[;JKSO!DDK vGY%[G,צ} $H)aZ\]not[)۵Mj^-%H5DCiPF)IF qӉEW'=6"{0k۷AMK |\>L Wݯfm/{\db=IL8I7Zh](S}mr֊C,'C4ur YՆOF-4<6oc(ڏ1/~uRjdɥ֜l QPz%'_@I2:QrsfOI 1 ֆmޣFZļ8AOƍ;WѤ"4wyu4 2]Honۤ2r<3Bj}ws< đ:◟/PԼye+AeIxцӐ?ܪ쇿~w=+e< z$o %vlgsO_l477*1Se1w?f8Xs~1b*a߹?IԐ@+MᐧT2:Ff-@f63{7DNk`u]JI'РJ@BQ tJсdGDPvr2]\K嬫 ;jfuه]/֕>*9SQ69 DIԐCQB"$%(oq#m)>7u1yP ]$P)3l6Vpey.7HJ)k5A"NP\"c $8P 8KSkx;B-7Je>x9Kݰ*Vn\T/@ Bh+3TdU@Zp3`Q0;yM 0C)%e1'@M;zVe1Ƨ9y3<]n+]vgJ(&K#RN(݆g^ju#[݁Rj^I2vgjkݮA9nOi Qv^]&Yȩ >5 kۂiqx^wX9aXJqNdi hA]a ^sPNg × n{{bfb (9lFN'㝶Dν,ؿRm98v `%^95z-1`E;_lQav)ذ@FQj"] Αj EyI r(ʂGP C) 2Tzݗ7g\ mњ*`)6SaeI" 0fVĀFeСvi }@`ѕmCsH{\ NgK&g:|Xk`qm#1@Y}CI6ԙ n= ŪGyJlEdrH~%ֻ}՗9Eljﺈ)Ⱆ&1^[|Q'Iptn!)*g*C"[vBq1͠0+K)B[5d9s@c[I$G v1Nv7^tCŸ5Us(̠ 7f=E IZ۷UcWtV gXm=5kf5߬eòEkP))4bVM&vqv/ۊ"= eΗzW%FA2pKbE"v-D D(5>9TlTV8ΒJ/dLQҒ3\pd+R8:^:ˏ4-Ϋ~2˟'g|\}qE.r=ʏ_x/Y-㓜n;?= ?/6}}t vҲoǀ]w߿fy 4d+u_]$#P7+#,Tgϻ` Mgc-ړ<띞,_kԦ`QvsjHՓEbժ[uef&nvd `qݜ $+R)1 HyfyZ?lf}Sg:行5T21ԩ?:Z#NF8Łcä]s[u^X.etUsuz|yf8d,~(IӲ=;ģ^|{> G:ްpCb=Xzx;E 3x h$_Z0ڬx)q??gߋңR #)R?PYqW Ԡ{kj;=6 l7.iaCzw>7B g_/o?w{"ø/ɽR{?Lj_~% Zn܏6^aVȓWu{3wۿ}i\g1tp9%}|q8lxƵqcz}lȕ y-wj%GupՄ ZY=qPpco>ϺQU?vA { _M'frlc:'trh9I>'heC1=-κnR⹏$t8B)h=t1W]}#c<Ӣ[qWV|z)p?ԿX;wڵx+wͬOtƇ7Ul)_^գ tcdvrR`giK|veeto4fƥ ewkh{ѭV JdR  [d)-Mc)'^<fj_Yt6>+ps$i/Ƞ#=Z|pd^F>0%eea(9t B\3xRl j#>S)k)m75:2k ]Lx2Q>twD\n(PKNdSޕ qm<BQ,OoQpuGn<>:Qv!jրL#pˍ=w> "$ŨJѫeycwʋMwv9Ͼh>u70x;ĜS_\̒k[ct\ͯm1a|E^;C|::cK\f5Vs:.dVXv:nm1hp%:ZO4Wu!!/\Dw)9m["e {;!2A0Lb)9 0Mzi"~8{(t"zٳ?) 
e*p /\ /,FEuS0-S,%LzpŒq`gG9[χ s}J%n,pKJ#_-SXo%ЙbI~K)H 8#beXq*V\+V8dDCJ(AA!s=(M$БңŸZqۗ,ƌe0O@8$dʒ43FQ;+*CY t㒨HiI#('옯(ȶ혷V)~J-Z?VAAOT<=E/Tz.~U?j wBɠ!jr ]zF =)d7z?fIb(P YDJ&P\]ZH h 5J(c7z0яZO( mz5rJЭ=(1uNSyu2 F)eiۆl zx6n/Sr9 ,pz|?7?m y?[Q!~~A/Yi5ĻI~ {uqr'?~|wc2-xa>Gwv:vqrfw|"pmP:R|?aGS9J l5R_1c(0=*+5Pt[R8Zdm@fO]*#a~%QʝzOSRNr˴U&78شنS4Ń?j\(U{7;y3o~JKY{Sf{OL0LΟ/ww_,:s23Q&9t^'g a|p5w6c{wUׯ _:.؄Ihؑ1i%ӽJ|RJ[=GCQab^_[0 iV7quQFeiHAzLԗ56=Q?@#GMhw(o~Q}͠7 ZPnÿU-`(x]!RFzWG]h Xpya=*YA-"Lƒ`2***K-D%֌z95Q1-m.H rU$\ykù.Ruhp\70Sb )Pzs]SqݠF{wKdڤڦn#B) *poEo`Q`%/]YMAu) u31)9E[A})#8huh ВAD󰢷e8+C;P١9E + %%K^F'CetR0tсvZPE xishuo_$j,uf(u\ BHmG!.BcP-s =GU!FiY36C3`S`@4\VqiY]Jq5M \ iKRF2 D@ugYϞ)gLɣ%X)+$i tD:6%"/zq.wjz[=W(ɻvxGnh;X/4mTufiOT5N-5?/}5}\Qw'Znqf{%.Kɩa%GWߪW4ע[?|/ 8jȅ>VUY/o#khA>A>j׭iwݭiF^Phpn9ukjQ#Tᯌevnz4r;O:҄OIcOz^kj8pylQò1ۆeM$A6k! IHy̽n {opkyy;Z6cA_ g?<䤍?)ڪE*ǩoTYeR1 t%' 0ڂn<{{ȼ`ts7o?~?d1y7^݆ZC#51d΄I| hwj-T&/wɝoRaco&(Fqrs5)[<xlNbePzԽ|6'(䣫x=R-r]xf)*~ϨnG5\ίU8[kUl2J[ar5%S=`&R)#rM~E(id/m=/p5ՄI{D~GMz-g4{[Uv]"xǘHDI}WOw (^]F."D`烇g*W~?..wgom9f_*5 ïdk=[Mvr*ػ޶%Ww?x"ٝ=BƚȗHrr2߷ZmZIY qdIů]U]2[懏[/|_t2˳E63*e@1UJ -"_JbĠ "է&7MI Aq>{NyӴ3?@k[{TiSԈӹa&x05baPr㴉;Xάk=Wj|( 0j<2@F!F泤/hѢm57x Ns2:ٗOFkB!0,wTKTwf8"U1}a;"9a휸9G,LǷNH(9gz-+?ckluCȜr" (z&Ɩ*TQa$K^U7#'GmdGduUb{r\3&KJ7'.^w*1fG${-g4Nj{ͬ|<(qetⰛ$wvp3\?=r]奝 T*?]ν N`o}m',먻JCryW }UپaU--WSbN{QCjAB* PvÖ̗8p΁R`vHVH9=Z!+,r9<6scG ڶvQx$>qϾm?4-Xש}YFجN`ݥڻۧ>xe!LxQ0aNw@r hB;A5b鏠ědǾ@+Dr~u^{we!ǿ/үTdq 8)Og _f>8iw2z?GI%9*a:ׄ2:wy0,P a UK% 7 6]dj<ȟϖe1^bNt6b*JwӔ/f#KAws0K.cwbM+ZRi΋N2CQw3z+1*S< T̙DQː^\1U9=17NIb *9pU*aM*vUN!ZjuvM2EUk*)[6?ՈlywyU2:,rW թ=g%.c*f*S,E3iY(BIа~xVFWlcގRpGPMQ V%.lˊUeIϲd6W)yiC+*j+jDu'yA_L6ի*0aC.$(%;K}DJwuF Pj~3!+͡UFAf[=e `_pS%/0F<HFV<oRXԭVkŸzsR蓺w԰}1JJ}ؼ f࣭mvZ<ڎ5f6|P[>Fr;.whK+ha[k`8# J˙E-G֝`@Ps乻9`ӪzXLs 2ϐ!/iF˂QJԻGKY]?n7[~zmMB>asJ[ q'CD|Moc|ӬW'٧pBt6_ow E'.??Ll2ݾwnX~w}MOn-4czP)i,CAp0КӨ*Ӽae=-Lk*"Y.zpZI#hHD>0υi-Q-1 _nإƊ1{Hx$7=,]vieM52@lޥU.N2K?-)-^·8?+sS^Ǐ%w7'&DݔB!d0:!raϴl0>ϮMY89LZvQףԮީ2wN BH"XT7²|V}9ȘHVĉ rBEȶfY2FgUA lyYMJZ`;jOu"Utbv Zϋx!3^k J,Iϣ`O=ɶEy"z(` Z>b 
i&Ok~{dcYBD[nyqQ* sDܲj6oB.fUxvK1=)PēDoރNykiuH(ŋ tβB,| 22ǃu߬go&hVJ|KFPSb<% >̼,SҰ3 1I~%YL34?J2>6){ENPfXTvK w_Mz{^$^;d%i~ nϙ8cYIfN"* LE͐ e[9u9Q]"\}뵍: ]ɖDld.f؝b%Nz< y ;Bihf]*ʁF;|w5䖱#,]k[\]*Jʜm>4F͍xSAՆIuwM8G1BdYAaQއMeu&2* Z+E釼p:wU?O/qA%ݷeWF=mkcV^JZ J@}x}9 Q%Zk?D2pVRw}Dۊ*|\ṫnA_PNK@xg!GQ}(9ł4W}Z^ s}'K/7X"Jb͞Is gq_f>w2zP:F(wHRu>O ;AW Ӄ5|z~x9_Pd^)Ioc|EA=?Yk!S[A M&O]-~-~ߙxvY;7K~'7іF2#| C ^]}qà)]I%8`|~͖%VE6=!OEQ WD[e)ς$Fν*7.̦i5=R´I'?$'3# 62\TѺz9UWh/ђSEӇ?&Q`g)hxL2$2OfN'KtݜXo(t:"+4 6JV ąsޕ5q#;SC3B>P*;}dMfj UEǎD_&D"[A W!˱vQkAGKKZDqx^U\-f,)]=[zNj=E60_ л9]]AJmEzq5d>|o}l.Y ^pw>~6)ڿ %CWF(Iim=*}9no,tҳS^q,>jiͿeA.XGvb@~aDQ]wz @i8pm0w?>ÄT>|7 (^CN[WE9vcdLF>No7m2hWS~Չ"b6nkǩN3|16V_樂m';/i~.^}r d(6'/aYL۪0|糸D`D 3d,ɴJyݞVR:)-gNRŸuVBH+)SiKl7 ]ĞNF0c^IIk7*NҦödot#:Ll&(jAIY ۂC{ӵT1@˹)J0Daa Kߑ iIHؖl2FqZ;=vdoˍ&S `MN0]3%;N9Q1|Se)gTwNn˒F] 8&h@6KN( # IB #\ԣw"K.`NlD3D ,g cBִC팪Nz HBLu X^RS, &6$JT ny/MFd;xhgzn>Gػvfs]RN P #WZ48!\ʭ`JKb֍9܎ Dr̘ROv;GGȯ_T q1lOΕ*+1x-J,B\: &ˏs.GA3(&B }/QZ(~ʛ5hcC^}ZY:';Vi6=~j41bOA;6w0>- /T<'t)㚹T՚:̝ czqҔ`Q|3~)=cQ+8IȿfTbK]s>v EtrD 9vhvBBE4C\e{$lT XAJϔ߀L |:WFf|wJ&Cid24D )b@`Yڟn;֙)}2_Ӛe6Q/1HB%be+{ WuZ&jNj|lޣu l, '9K|zpf&,Hi:taif5Gwq K@ju4RhB=e C} FpT}jA(Yg(톢@Z)OX)׀Nk `d0V?Z.Ihy%C.@μNɆ}k@5S*g+$!vNJ{/My SӀ ,Ǧ`DBi ΂Z̼]X{bH-l57oϣ}ltq HϜY.tilJK\'_p X(V,L;baR g!ñߘiV,Lj|,$D@=+I.[R1|8A&b2G8GV ~mKka2Q̠ ZK~[uOHX$KW2hDWp< QguWHX*wxZPR?w"kW g}Hh2q*8t<0=X,s/n_qXvJXvZXv#ȩ@O} Zbu¯q+JjB0=A3z#h98pT+]쓁ۓJ(=۔nL*I$LPHlׯ3BjB]: w܂1\)v%&`^ IG67wEҢwܘ{c2j UU>܄Mݡ ~wm4>}Q)s=rǨu`']+3v4,yE oJ^vC[dˀ ?[1˜hJ]T媸sj.ϥ覷EWjV!cՖ97!$WjծEiw^ j];F]؀d-r71mVApI֪dVTI;ۥIcFCȚk/J!.&%U:%ywc8M-z_ۓup?*ԏWqd?\Y' dFƽbC5(z2fB :]@T=ݻh!SZfWEXn^5q+IV B{*R e)( |`[YwOɟBvB>u 0d.rm1U%QvmATߖo GL$%Z]S: X97B ɥtO ןHbE9݃}JT2G䵔>b*+!Ir::&j)Fn l1=aQ{F&MjT!9!KJ n ؒrtaXa-%CQƹ #:+QFI)S/i]ʜD/PO\L{9/+U`r@8 dN JEܫxuGc$F 4Ecl/r""UG:6Tno&a?HsɠTy+:ߛPÀ ;[ZnrOQ WfXxY=\{e"J dqBe;gbj f)DO켈g)ӵŨ=Dd bDHN*\OD3Gf5S ARTݩ 8EEt.ыtc+OP>Iӊk{wxTjIK㧻5wf `U/LEh`"_XZfuCPaFMF?J xڽ^nO8ʉ?'ZZHծ҅jEV$>G{l>SnKh$brDKKG{~HM!^h9Ŧcozio͚囿zf>oQ|c"jKe})Ċ5TG A50, fѵ}Xs CZ5bn}= ~nPٴ1(qfk.͋r tAZa v 
rXuYv.VowKמ ̑ӽmSVw{R%kGb< sM[Or^OsyXCF J!pk6/\ NVoxfI 7Z3h|lj>U&#,r!=6( i$Ҝז7A>Q6xoiɐK`კvz0k#N>Lc"̮"mg"T\E"xR$.=-stpg+_+NJZڨ{6n;v7QUnF%MeXIQkU(&-KvOOԲWH`MiT j"qc ^W<*SKpgq~v <o7\"!.Q!"@iPj1p GF}gX *,e}v p}?  0Rs?X34 I` DLE SD@*̨ Z)$B"0'=hEPe 8d0G@a4I9 `.yx pPoYlM(>'>$BA#@FVR!x#4!UP>X5aV~"rY*( dz~8d ijuYSi.GԞ9~K @Eh-UIT8y*ki`yɀk& J\`U`f@n@2ť|sߛE%D-=~r_o^Ti)BJs=Wtbz ٤3'*xL_Uޢ^{dqE+CSiVIiw+~юH aGԡU2FdE#: B 9#/H~;?BzW6\[> Β52D+CKaqJA=nE} Bs}9{}xQnpދY\m! ]uːIS&N Pm^Gi$乣hsVhݹ ==sUB5&N¨uy#ạ47l!`tbk jA]S[ ־o"bDRҎE;M>tdXb>Ïkpy`TGYn )8RYpDUcS`yt* u1&d?\#VV |:Y{WsfsDP'=M *89[ʊ+@vyؼa}fBsnn峙0'4k`ږ)(b@LpAңnkp!dǂPѱ (W4ڹļpBloxٙd+=!We+=oWLLBӜ9pmΜ"Qa.!AsJ؍/:qv$`A&./=u8wv`Y{*F:H ƣl65X'}`"}K 92Z|6"kE` =B_REFͯU<6ohYye  - lDܱlRZF!},V?H>zS?fÊRZF߀5I;//3dȪ\ςSw #4c \wH}qPqrTٵs.RoͰT;8h҅SLF9C.9<|ftdQ` 4^SLp*}]l^rJM#BC'HeӡPl% %d Ȃȓ0 `p?%|Fwqn0!,kriWL`ȏdlЖ6Ӊl`Xa}?3<=[ן0/N$sm!Kq;`'ȑbQ-'}>~.3? е ƺwF${V GaxPD@\1Iu!'6k+Ͳav)&[;s[yvRjp^ =D:D0;O?"YҖ6NzBgn Qi0D9 <3#݄q$gU-*Ȓ?}?20祭r[b@Ul=+3ST{;RRlUxf\–=<3( bt9 \r޼?YQ0;3@( =,@h`CRڨ%vPckeFܛ& muߞ-!GL0bWdOlw:vsDX%׽2Mb,!hZF WZ[.!Q %|)S22I 8qx3JJ -s*"@F\. B"rcAZŊ ɤؽY$ o,j&X`9œÕEƏ\ΌF RG,hx073$gYj=2"c3æ Gպ!GpZ+ @;@ 셀, |A#-|ׅ6v&6TX DN y? f767iQ(Ṽ(0=t`S#O¶B)oͥ!noqu*; j[M:We.ړ\884o X-r[W:ʚמůܘ[Ż$K J F J+O5I00>TɀmGQb(Yޮ !_փIV CٓRaExz8DF1n}Ql' :D|EŢUUc:*ǎM+;5ʮRzi^xSkZeD:DMWq΂ m%n#5Z[u9Ʒ";flNrmC%m ;?y]=_s QR9޴jaqɥX0N㋡Vk{aճK҈&]ϔ 4hLhy5&2V=\H|Cx @ C}P(FŜN㧗ɈKޖ6aiXBZe/55/ÍHP@h o. 
558XbBRUG*jz:V[T WQ@sw䷗=A<HlA䟝Y3qͨH|RIpяzt^|2}S/m4kN@/|=MtdI3?s9f2)׀^@ ̪0DH\D"f(3H\ 4VkhBB-쪨;qlhp0%+Q3;ewg;g&33ŽZ3$k\mgnNX~%2d<' HP>5sV|>AKD^AOs- izІ3ƃ Wb>},]gHB<@H0J QϞN'\l\b6gdf~${=]Ѕͨ=m jIIg~l3)˗3JAm8IZxl8ǃAj>ٷix3pqOHe u  f`zbfe"B?'ʞ;L}ɶq8:yph4n<o]{X)@x|9=UD8_p<ۯBxqd>״)mzdykKw*t<.g~xd8}~KfهT؃gJ8_AxFM}0B"XRccE)")  Mt7]($h4UeUUyYl7ݡ4~zwR>OM&'/b۹fϯު٥7wQzF~WpLE}~:.+fؤ vSyM :Z՛K/\ ]&=mV~Sɿ_ϯO?Boތz /08{_z3.zw>HyqA-XP' )]Wo`kۧa>AՏB+rs C/ ﹓P!0oLJ`q~n6S_|f GO:ŷ"P税PJ%x7%oE8ZvS6$0-hЬ 8ӭCRm&D/{.nCtޢ4Xct0]>^gl~\Ξ(VҬ[hLM8Knzz7׎`)eKL~N7K`>hˎmrt -حU:5KB]>gKy-ݲqKTdUb DMw>w(7wiGZlJyMpԪJFJ<,ڲ$]ydžwZѦ7X ކ\ CSԆ6G m v1wj}@ן vpk@qEV[[no;" lqzit>'oo3fܟxmSR`~٧; RǠĜ"O_:iї(TGh.Kk҅3K01ffXPqDYet\ +8(Yf%bװ[} 6舤֤IGc L DpOU=Y CrDs#h5,Qyڳ[}zluʏ2)XӬ-(EƕB#,sQ0‚ARv&;%kBE6+_-QT 7ij{MTүnxUN0YQ2Ofƨi|{vatTrEgn۽ LYS_.7>Sua% ({c5Kyُ"g<|&-r+e, Tu[R"CmL)3e-bFEHi'v:H-wA-&u;j[ނ9k`%ki,=i7T/'2u^>W{.ǼVc[$L+Ŷ:mx.fSK5#\sA,)#_$iX]M=_ZVBmXnw #J F{ gƴd֒JB.|CnX՘`"[*I}5)M*VCDn5?*OUKbmC C`>+sB !gteHM!8mqv͉)mܚt|#bZ3֖sv"YѾ|z" "?_]|.6vpZQMZ+KYraԭ{S ')%P󭨙KWBa4-U+޾U3nAy˜ZYP;"k3zzѳO::'㧏SJc yLZ{eK)5+%)1q2#%[~XsoAƖ¼i ac[և^^$mH{QNFe4Latm.w1d^aaO"OiJٝb[ե彩-Hl/R*\ ;s{N*nIV-~\tkt^+z}FT-E^z)"Z[\:rpi Ƹt>S)ouLǪ6d4g%M?j5YS`Mx+ #~i$ǒqOZIJlv_?@1lKkVzՠŬե4 (Ӓ\V~K`NP'7=rݐr 35A O3 !P0*BD:a`QсIbDքARpFHsH.=b:zE3eIf')2+beZ]NXl`TK2Mx.PD`&qDNE.l 3x>Hf>:hIô#9IuSg><€{O]̴ < 0gy6_ H,FB"/9̆Qi T1& jA+:@ :@Jxݠ6ڋfDjPkp ^ Fd0LLŐfhn,1[ƪBV케1@@.Y&x`ɤ20<bEg$ac?4%p Bl9)hRBIuIvɋP-ٷ’dyk鮓<0=1q ?C:ӊLH =gQk#D*zC#0 iq 8}D0%q$5c 4} rGHDyXPl"i !I`݉G)F & Cc吴{Dу}SJ`AVzY̳g3ʞy%E_ӗ&/Pt )[D$ER >U )pą$I LU!3~)+B! 
c U<;WgG$ J @˜mil.f3Y^/,c2 Q|:cj'3Z;ȱKv*rl`en3$irQJ~`63N{T&P2H(1 ZQƺeq6/k\.fuo<y2Ծ$fo@{梓 t_B7_8N?x|; ~/dZ{8IGe fLc/7xt?i?ϲ$-F|k=?: :x:rO J#(_>>BwA.ĥBD2PQlxɰ]U0E `!sfp="D 78HL$цZUrvz (#^7~ _ [b< <ĩ,`Q˙(2)$0sP$*Fz0R:ïïv~$ՁRtrg L X]v7ybYڰ 9.P灧JyЂdHq FO7Ln&چȻ;iB嵐kK7Ӟ ٩-C,(wDٙjredחsb{qw1{yͿ5\뇮ㅛON( hZNCr&:1ϳ|fOnM8g1Qkdl ;z0YYz g_^`*vMa g/[NnbV*d*9\F3RÏ}Y86K@gH.m>ձ@KT"*kPbQ)w8^PJ qQ[f^ +g㭰DFҕi rU5(Ay椤 aJC2S>ŃK@8 f.BOVV5RG~9~yCHboCNկk3E)Ǭ5Rz 6: IӞ2HA 8!ڄB#ȖÍ]xL$hO2@SJvKW_8%?FpEנ(8t4p ڼc@ۗ*f/fF?xW*! vui,] #q5j(ô;"M/R?ȑ~妱t `4NR 6ź݉i,]RbS Hlf}5$a>7~ф篒1TSP[JY{fsmyAg\P%EY;']h%zR{kw] )`Qׁ KADyͷ*-1x M. 9khYuȧ_\9Yc7D$r5D>9;=߸(M!j$:yt^@S1k g8Μx99yS{ Շ9HְARĀl}&V*q+D h=:Q,|ȳ=??/g?g7V;@uwp]ij;_¼$f54~s kmBV0/-YMays kmoV0/-YM^i'"J+tHsBSZU)wKaV!S(++ֽ6ҕJ*xkoGd3nݻ`pAci6ֽ+8 P9]ݐݍ#0 º_Gkf3nݻ]Ga|9C ]}`w7g#/ +lx\݆w;Jy͛lx'K]C.o+w9(A׼!Ta8Y|)ǶC<֜j LyuWn߬fB PQXk T0/-YMW;Z6H}DHjGw Gv||hqDHj^Bbti7{ C(k^;s2LZ7SHD Nv%nx<Y_c)8ZN!؝c'S͞.!i#mQǻGR=?U>UZ}Wkw7p$hu\4Q:!h<]g%4rQ5|],f!ejY_8[lPԁ. `W:m$2\h\zOI >Az $5?#!.Ͻ'zUpBArXs*Uة7I'-=*Yŵ!T)57xQf#%4$T1'7qEAdLX)VEgBRET/u$ʠXS6D ET^x )cF-x\a*j5,ZI+?YCT4KeX=eb0!Zwh)_ʁ~澞/r)I/a ;m=7k9 f+=RLpR\5{g&uh_]B2._1`LX?w?~w6byS\\Lgw=#'Pm(?SP$+p6nK:_J7̍$Cm t̀#>]ݢlJzRm 4[zl/wKM:,s13(S[p f1Uk-Jhª/pA8c_BqE f385RsVj}RjKv.uT9Êk*}:0tsOcs\jPԂS8x" A #ť Pc چiyj VI 6/=tL: MF`K P XO]<}{ҳ^&VT{QqѼQ2]~wK |\sƒӜ8Ui9;_p_0pڡ4=zR (*:X 0}r{;l-r4reh+"3@SAhՂPR)u `f$SBO<9δOS% v}j/FN/na4],nDpr/?fTf>yvŋ~>k ,..縸Aa/PesQx8ޱpNܼ}OڋD Q%@h 8#/DD4FpMR; ۏEeE\܇_ .ަ), , %jR,U`,:sĢi &`pqj*hЌQyOTF1A58@6xc2r(H*lM*eڱ8Λ6]5MQj%\2ִ@+$ ƣFLɈGD0VAy*Ɔm/k 2"ٸIJBypzpQ` ]XkEDCt:~a! 
1rt2ijtmc}ґd!`(N(xRNQqq Vkƴp&l  ')*CD{48}SmQĄeSWNmd;['9B{/_U_S//JӷfB҂мOȯvk\OPW&756Cgf}L@BjΗ5 ٪}2GRٞ׌淯^W7s=ۋ) *?[\oc{«sqK~_oyPpܮF| 8.b4O?Y9w3|w YtLp/.|jy؄ҟ豭gB^%Jpb۰b~vo mm*GQAW鰥.YvnkFT)QGeERLR -+&%D]{QZ7%t0 :(y&Ey94{Q5QGh]Z-ЕwMؘ8(;(*aageoF馪tY0$a (J3,q)v@J!JU:j,P %KNTcb)iH{R eU@ZWڝP@[]òj0hOam*t[mPRP{umv.Vum9Q`6Y.\ߤCDS0YHw>YSʽL(޽y폯:&UҮ3Oa>_Yv0?_"NjB0YInN6*BN|]nCؕf|c'3Hjr-sr'pmXuAN$dIZ#2/w/=y*Hf֖[v]!^3{qD)c̺4f'o T_Ox6yPBG<<[]O /i?XѴ+lO]p6곒2tB?[JOdSRKOކݙT-jsh*]|oY+Ծyׄхyv# mğ-кl[V$r5cNHM!-E9<P ,+q<+w{_ 2Z ;b"2nxFyXAU0T@ M-e#rsz|Uz8KH+^_fTE ƙ/'jn΍ֆ:׉߮%ͧE>Էxl5ӝ"0Ja1|aF6.=b~Nm7o/zorΔK#M5D> {rGmG*Tq!e_J6++QY>x(d1MٯOxes(%Q 9:?=^D)c&^LVI|2Y*>MF'y'˄)e>@z8usMXF }%y1$S +o\uz-y ;&|G @2=NDž0,1LEg!7tw?y*Q倳eqqI!H=ۚ`޵6n$"l/7~'lIv%7hbˎ$d&?EIiY&)A [Y݃YZIK!O_+B`ieL_knIYk\h1 }pp.*0(E=mxFﵔ0'mݔ7=bSe77G{ \c?WK1>'6 1vi?F.6­ "P3 8P4TG'wY>Nxʽchi:\.5Tqrǥr7>8~q $^ qDd2p"8Xrpӵrp@ nKB=wgZאCFW.W"/抭-OiKR%& և)ibsNֈUţµެ4#eElxFRٞglqK2}$qe3ǘ]ZXqK 9my5ev>#7+4]kk>Vڧ!*M qtqu̟dFEۮ,|Ru1޸{jQ_-SHi"haf$RP mgvYʱ[N֝|uBxAYwޓ|`OqQhwfKnA^![,?FpS7ss%\Kj:1%f Q!1$隶9+خُ!'mqO} w-?omn$6%kŽBbr0~5Lj!7>-+MuUGuF2"[FB?ROIT7- @vђ{K#Gz5 JU'ALK35"HUڟO1D>t#C0eR@zE3nb*p&QZ)XJ#WBW}4Bda pY"XHt4ܮѐr)7[)*]es8d8l//;ѿza݆z}h|龝.&*zg—t'4 >#DgWSV}uBH277<#-O-~d'$NnƭGv[CiB~jGxdz;`ՈhôہtϧZxU5'D]i5\V\Kv~$RKd#%Z B:»أVF @2x:ď{&"}oX⫧ ߜ O9jkgYfr2PB*+d~=uīv}v9Dž}])I^h]eZzzZzMcl 'wmlrU Zf./S:ĬgQ4Dr ؞ܸinܴne]$<6/^w _f2J:7+ p$]󨿠g )щY/aDsc]VS:G5=VtjKۊS|/ kJC*QŠ覆_\݄G)L.Cz;C'fԘIj杙qX.LQιYV1*Le|lk>7Lhe\He*K?>:iC;3慙uh~ 9} {qy˥ M9 mxɎi8 H :EEAFw:< Hsͷ 0k!,KGž糉-\lJqQznb_+j&?ͩ"~kaN_Fzys _V Iv8Gl٠fu:Oi67 d ;}7ӿs꧛/\GҗSrpy"s @É&';Lo( re2&<"(jW#U;;ÎBQ iY<ӂ(FI5}Wb@ʍoijdC]d; "{Tw`U3=n&]nL\Bu}*;Ί|^y oج:O%׺Vyz鎪./6q>xy~Z3 SRʿ41 rAsfrB^ } ..]);,8cfcuЬeƉܠL) ; S+۹a1Rw"N|EiBD2#V@X0= s,\;g5}TP֥\Ծb#Ԓ6kzT+Pm==Eph%UR:3!K{jmσJ@4cS20WB'D)k3?OgHy殮f>;fUHPҍf69wȞuo0RןC t)>T\wgs?>5H]i۫M+P½?uiݲJ;>\q,E׿^?3VO`~S?A?! ^O&s} 0@-0z(~WccByǣ~na[rjPmQai=hنgZ)(Yi5ZrmC ȟ[.gk W.KBQQ_aڃjmj]rӮ>,]n7unmACwܦw[r^}4y|k~D_x} ?_G&R{+ijf<^co/nލ׬Ae`qG?>CS+;6_KJ@57ċAb>88#?b? 
uR|9oN図![)i[3Ј*EŸ 0#5#=5&QW2DmDf,⪧INRh]s4YwJjA)D&xG ,T2tx|5и_7gÓ4ȾQgp ]:^2{r=zr)q!zxJBTJ20,*F27&auciN9<"Pv4i,͇LM[Ժar&K : 5|P8eX0gyhY(mC3 !({ШVx<-%AM_p_z|@(jI>Xkb+g9!gNX:E# /H/>;^6u֜8aIH>*tq xN_2hC ZR%q;c'qeS,sFrUZ jD&T6r O,õ2 0唁 Z`16"㜴xO@}9Gk$ړܡa@ە6h0+ xm:+YΊuVZ'NuPC_t} S'>L3F}4}}*T`O/_>v@w]:IMȇW0zCK.\wS%JnC4Qz+B,*ңQ9<: "1 rjq kVn<W(-:TxR +<>w$+0Ork>wΪ@;&˄F!|{}&ȧ yr~Bx~y|#cIP&?M{)CPu#կWD^\\\0 ݨA='~.h9u5=E3hV,!_kcd5l>cAyB'UMg<;CaX(y^0W _a?eWïAw!B:2JgWUT&lQ Et>v_$iMmVES[r"|l4@~4>wf 6-PT5}t;V@"hQ-DsR b})TQE_@xgRj1u|Qv㎚JRɺ%]b]mXt ]3ڹ$t75?wdt^v2\$a]\\SiewZ)xֻ7/ݛe*MuoV(P`wgF+yJ1Rzw[z_ 0Ռw:\RN\yo;;33c#aEhC>bhQӤ)ZNmAV:1AZ Pn4g8y2'|Mg+@âP)"}-5Co)D ys゜ "Ge+6D E%c4.|aOA +r fe!\ 0Py9k8GEHFDa桇4@[eS'jHCc[K@Acrj}'>`h)|%Op%VŒOkڔ twzRj2oq6G XXc@QfFPLhO% IԺSUIUա1#:u$Jh$с(g`$`'V2!>d< <zS3p4ZOάA KdN3B&q5@X?Fv9 ٕÓotW_ر䫽LA;p`S]Z r1d@ V~HH S(R C+?ASZG(yo/=:STm2j]A,IaS BPW]Ձ9A AY3utPұ6" QÂ6ӸOQa<"3z|P2B]hc42 c`ic?=*C(0zA *,ms;]e5NPm5}h-tʔWƩop1l(gV`d&.v{]hMњNDkCkXrs.x!(d!, ovbV:M2 7Ǡ'b;o1e (1Ac+YxQJjU ՞;{1ԗg72 s9//M}IGVpk*s95q.^^#q'E8JQ\Fqf hz.R8-t-={XMsaaq+[foK#ۼXlALj1oWT+8эzujE#u xW4{ f<%@PEiiD9:iquARga[-PP ͢=x=DR5;wI\ߝD"AfHH| 1}WY8V \0mI0῞x=gyd,*z.?w'%*0v-$Ձֳo_qok4K*$Un!UOR' T#\C2UE>K^ɉ47(gk`F}'Xu]/3R\3c Ւ;%mZ3L6Gw >`<۠w^vkRr[?W{AN`ĵҘSJC8qgAt])ãr&E^\.u˪092$ \=u֔ zIJtk)}xP!R\h p *OE?a׼j~ rAQ5>]pǴ4IkvJ7MHw`m P9FR>ݪSH;٘Bڡ,:O՚sRaJk*N9MO9Mkisn2!B0IRr9tq\~ UXt9XҠ1~MF f監 6R:֏Ý)WYi%aHLV$_xР $P>FܓTlB/PJbQ4u|ٽsԓ5tajdO3q9] U 98`ȕX;+{Iqeyq5-VBFEic`((%@!7R 휽#ڵzv)fׄ1UJˍQΦ6( V6;m ^k v,Xk"2աVsɩngV?yv'Gn lKzdo8B<K~m^pT8[^ڐuW̡W Yxh*m?r0,Qyw4֢WҖd˽{hU=yٛU ei)媩Gׯi&;ۊQѽmF[wknV3枨C>LI뙥n iXΒ柜}>ǣ}v.,<؃޷w0s$ھ7ЂhwgD3hR* io<,ɺ=dnn)c/+3k@_p__54H+[ۑ FwG/ <~NU߽q~~o^yy,i/߼z{|Fe;<>??~Mm0)Cp1Y +_zo Ռ퀥;}[e.|} =N.˿~7/G&L0z{=-w㷀 \m5L.2^0O{[Ɓ[@7hp?51wq1 !g`ryI>_*4͟O b vuu*! 
aܫ(4Ѡ+BX(0j \ٸꐸXDl0  w*$*pqwRv nBe{:ب&I Dp,8 jEAOD4j%q][֢m @2y664RQ/2H0k=l>քL9HQp-k=_-vNȔ6(~l5wUw6ОXDМoab2T*QTL '|U) Dex`.I*?*K%L*`ϲ-V2Ѿ{)ľg3k4u(zw'sEUue Z 2d 9&4?u"6#1RT/LAV>"Pk}l'k,MSXLҬa&=u0ƶz^ "FkrqRMkVBwTLvsޝlRʊhszlJeHi_$f4;x" FDd2 gѰ%d8gtPKDk3yO z1kbOO9WFҨKW ڌ)&3'y5֡~)غڸ[ G?AIv^ JF4}!Ud;@5ݑ#z~O;@z]:eN.|n^+()G' XM-|TT]P&521௔To;9)'bupd! 2ؑ x|6e6u ?!G$y]ݼx(GkA'7ًA(ҕqNV9S-Tc lW.OpA.hE`mj$C ,9<^l${.ԍ]Q\JzWyƲՒ݅98dȵovf;>@X1euw ݼ7rN 2?X]:XNt>-tk儌R,s Xe]M &+҈֞3u.?gƇśbͱƧB9CuzD(2'z1 ̣_8M2"Ҳ;25>2cu1gE_ϡV0t:4E׫Ky "rnu*K#ZWMg-vI)#d&^~jhAUTFG-i;BTVJsY2" KQ3> Q[Hda㊶}14tmȢ>g+KRQ̎ k \ގ5/ Gp(G(!unyCG uŦbGk?EQduE6[.o+QQq#c',6qjO P( { $IR,t1P7,ϳɹTL FxP|ۏDJPr= @'+p. aɐgiݍ&F9]rf W8|aCK. =X\I)@OwE-㳔:sƞ=ƙ0NV:t sD95l.Gnk}nVFr=ǔ~h&r'ө[gN YREɉ~$-S~e[<-B  tĹ!ɸ8㏲|49'0(e ̠JMB'Y:T؛/4LGhmyv$0A2 <•_Љ28CY/*&s,jMQ< g $I2:\ +8⎦&2+YRA zn11^:2F"l;LM'V?>lCwab'-h D2PQ/ SwWUѭc3}d&ۃnWގ]o"]={ &i Fo* {R4M >X5!އlλVJj^/5V5ojN[m֘C.O>x!38IG {(iBx8!7yV,Gd ׽O|'s 3L~6Jviy5FٰeUTkwY"aRmy?/~nH#\ugъ줨ʹVfxI(fk3!݆/`/ooЄG;^UO_4Yi)z~RG_lLkqXqh+fa>0\YSC<(y_˿elM;Ѵis6ݽ^oNäx˂ԉBpv%u9b|=ܖbAAK,*pF{bM @߃P]b$(#k}w8H4j[lrṚE\$zc{^ <ě*Rs~p)(my%bS!V [ b &6KЮ9w8'GF5r=Jb^ku̞ j RΤ'J*a!H OCT[mLEkc5!A8@RktRm։zD@[ZSrQ*u!2uks۱p?n>Bqڥ5bC<ԐFigx1yr{bI} nhݼ6<0a* *zD PgB=ǥaD=Ǧ#?XGËg|X0qb)T;J>º045Ƭ:BD =^ lP#e k ,~YsߵZd L' -'HB$cJ8\1:x"^n ت6˧`T n㷵+ɱk-T1usfIūx~0,Vj'Eo%``I̳7Y*PoVUhl>*l1e?LQ]qjMja.ϧ +!]Mة[}&? 
0P]HI5e$)*gO/ϷbNw ɫ_h8ow|~]z,ǟm _mP[Sp=gdއߴ؃>tL\?hj?Xth*ц|څhWWQ:7Ίv4c7(ކa6, TX컸n_jh 2w)u<Di?9Wpt# z04Rqg7ξ 7*/~ Goe|Jul0}äzfЫh6w/y űS??&o[ou?5<Ϊ߾ʶPҎ T%(ı\9{I|8 qyi 5^^(بڌǂ+=H9jM1lr-aM%XDo?Zls)w0aA0<|zfF(f\0¤ܫT<AN^-Uh8gk3EDK6t-,]3 /`GKrҴ3z}[Ql.Q3V ײ>oV;sܐNCOi볳nA\ Rf0̦лK!'ã` }J*s{o><d4~3=A95 K Nc>!dSr\UՖ/uQN;';O$N1؊Hx/?/iHAW#Rb< 0GjMQJUS.5;I?V1g' hXR㈏ J)=o2fC_rVL>y35YaOlfM~dL@q Y"0?a̿'gއދЛ[e2XwM~^0/~rk9$S& a(NE%A[ٜZU]@n`YZ7\=,-.f`Gݦ:  AioXhWe \ӂ΋la-lY}?ta܎41ǂO++XL@ BA{N!B~ ^x;C`*',J .!=ڑQq 0 1u jzo TSo`zuazq^tVAQ$ڮ\4gʚ7_a'B`}L<8Yk²b3H $ Apė>B՗YYYyX;RG q^끽5ІiZYh%0?o!)>[`F;FB8&b{=LyP hG4m܎{AryӌWo]bȀfKU5S@vv3Gc$(՝'#hr nZlZݛ\6pF::rq g?moדt;([!ަs\ܤNPhX,sߡ3 g'VTg7!4dA Y@8;s~z=^ y <"; ؏N\P`ǂyf2h8y4T!_ yag"(^i\6Wb6KSnSEnY8>waW?z[a0J!jt(lCQ=rvw9I&׳oy+IŸ'W.Ŏ[^/W~kEN#+eA:t^Q:b &\?hH,ˮ,?7=$kXx4"O4$&>Ci_>{ՃlqE?WNj8TܰkITܑVd~Iѳ;sz's ĝn\4¡hk8NL& Iyoq+3b%30eE&sf9QfID?`ej 2%bkG/tOCcv6:Jʘ#m5R3ׯ,@rĩ Y4$T[Mw?v;M9͐B*Ӽ)"O1[b19ppgAyXz-V!r,b)lggnXXy/IEժeB4'a6mu\}9FImc}>*۔v z›lՆ |"5͝+"%Ö:8/ރY.gC٩-2Wz% z vg0<Ц"~m![[Vl vBӻa}neDI|(ϾVxӕLDR>1D ОuxӅ0Kw1oG8J<`piRT$||)B#:3lN(U)G-x]ͱdŮh$X/2iĴ<]'t< mRp Oˮs3r)-8\J 㳭đyL2y2`P1#$%֊!9#V[ ʐSlά!rrՠ3M3P_#ԋq}IcZ8@T) ,[/=~DN%BJRVʌC8'Ùj("Ո;΍uS-SAq!i w שP\!kL ĘbaHf Dd XHX 6EfQ9^Qp߄&7\Qd*JlJoVƙԊTcP Wo2 Z5lY\r0mY|[|}1--ć mypJ &)$xb8^b2' {Ni +Uotaf6UWߺjD7%D "1$3 i@Ō*0HxIhk5y 6)YRΙ,m&uRb1.vXW;J,6 r.e>6Dm ܒuVT^ f\c| R ([L Cuubm󥽡Tr|H\n֭.!k~󮰼K'p?_b}uzV۟rD(x>c!X:Pl*j)EUA\kB’$  JT% n!GnP8dn #$C|鏬-5IÔFZ(5Yz"8kɂ8`(\CjQVMYȚA 㒴` u;dkGiQ0z)=ѠR5(‘:J3b7f 5rI!pHAmBnyz&C+R+~#`gdݠ``Ux$g+8!L̓u!Hl}=pi>.fϴ4V+BΥg&5Nfƃ-r*SxaA[ oF#BgϨlvƪ Xtsl{ɵv596Sy*{6`Fۀ'Y/W\ncp=;%n57N5Jo]Z(~֊4]6{/_jOz2aTDÆLdv-kj b*Y=RY:wj;bO0AҒߢ6;=T(&osFEl{.I{g'8CNKca,ډ(οmI"XUG;"%6Mchc%u?> P{)]#] 34<`lߧP.+ɻ.~''C3BTpjC*̄lroԾ(1CΘ ۇCsf8! 
(=7 KBwneriQc+믔.%94?]lZvɲ!nj79B|lIq݈ m݈ ]k?Yȃ蠐xG\*MgnN|~bp}~ {#Gޝzdzz>" BoD+]^CRTJ:EYs$ I2 ̐]ǀMW Nuj1Ev܁S p.-;~5}%ۯ 2D}G1apG?j4n\*K~#UTкj{C#设*ܓm*'Ϥ,:?8)%GGB [[ bX;Hݻr5tk)ݺ?8)E)r_.vbq4]΄˙N5aoENWM40o&z{[ /g:j$S92e'%k5s{*E (HCgN@wB-I-1E8-M1f Ӛ`ۡ6&J=/R{5pYvC:!^W'[nء^Q5 D S(Wyv֭ϓ"d̶/~xvbyjfXt;ZE6yuBRxܧTq,5M⼿vPUϝ xծ >mM$c/VIeGs;S 0$i7Mfz `T ,;=yJ(88fs{? y1S.HrA5i1ctDiaq֥[Fܥ1b),#x?m;1f%/9ˑ\o ;jl_H)`ZY9,Bş .ÜbexNƌNRDS5seW2%hcF Pz8B2 @˱C`ɨdG\^FmgLR*u;I>?c Q:kfEwδPkU$"Ύw|bT)ؚ"%ZOliB0PjS{|gjuj\wujûBW/8tRհpT괐F BӃ~]hBQLӞ\<78 |/rE22ZA I~IȈ PV CW`/pt4xbg ŵw 9T_ <''/NVtѬ7tE]-.{P}%Ҡ!)ʭTo[j\6&W:H6W4:Ȭ&W48it)t^bZ0N(PMHvU4+(*匔k XWW{t X⊏>p]- bf'r>׈p'PJ:Eӛ枈"cAgҕ<"qPGp:Te۵8z` kAmz+t \OqY&*52ᨪCj 'RrOO[rNg061E{4F3fwwVFۭ8Egta~ۨ:MÓJA& @cJ&iejiFy4tIzHLXP`*(k`J{D0LPxf;A)bmxK|;) Cvȋ4,:)Eu-hf }S|0U{6-ӚT[42 9̤xYO{mmF_z[:Cv;io2M!0ѻ'NN= PP$\KbK$pp6 8XWjH w}k5+> Kk%!ObacNqk-&Qb%cAA ,)3Z S2p ! !ݓf_ݟoat냃@oo:]m);Ow Fi"HI̔4ƊaSG -aTP N1qZ[!E+]K%<1I'y7fPse#]eO"DИӥV;&KYkXИ1ESIhأ2x9lGvIWpb@}L#ũ/'I0W芷Etk.b45AS0"3q!I0(XƩuTB=NyJ&'BH{ nvAQ:L8 pw]YVo *l  XBZ4dn(F3s2'y7/}wgx--N.N^ R^ZnvspKΞYrA|=I;E@#޽6=Vbt> u_^O߾" rgߒۛ]r}Pc\߽;K"dw@y ^(O</;WaB؅D^ДN??ND%1BG_˴͜4EFtƃ.եrx&}Dz7gD=ݜHdJJVsTGI.Tk\HZ8|xPSx:hxڊHCstr!$~hSRc 䵃ʡ%ۼnR2qn4pu:#e hyCmvн\;1\?r_zxVH]ãMfN55 \\_=+yedaN(@CYO 56LQTju@V!Sw*i6mV?ݚ΢xJӓ35ϟF%TrpjKքgy⻢r'eBxvsCg8z@;JY Q#cnIJ#4҂(bcCՊ;N6{2«uO) />,ti!#4xJ'egS!Mc&5"Xkui[0*ډ|g J6x3`2ԀQ &[ 41Zpd` &I?F zK@ׂ,ZmխAM&ULm+e|ݨ~h؇W~(mm`zËuh1v:ÀLF,v"F!8u6IA! 72$ 6B)֖s Rڑ lk9sJ3~!!N&dσ}(AZ3>7NBD!M(L;\/D]N$cGT*! 
޳m,W|Ƚ}?䃯EڈEO`i%U4~gIZZm""g;3(rFN%<$P(3 ஭q"l8G™+wKchJd0YpEhxQPĞz4|#pi=ȞSUuf͡`fP;l$DnrVaItV85szPV,A-s{I$1s&q# ,zNb a`> FK_MQō^8/C࿯ڋ?mJ'7\ߑBn&68>x7]\ ো3xk n29L pN< YNl߳A&9Rq-q gQ'bo&9h6"mw~N8[jmp gQ'Rr=es-w\q`:(vЂʴi4K y,SڒLa|Nef Nǽ336a'l  (5&7z^?^mV;ٻ&1j81&K*g~wrBxoq#;EXX0^^{}*qcͬg\;բod%r@>ldAty]3S6>U:h +HK#ISH.HI"))gEVN2 Ar n򔡜̜0ɴGNtΊPkej'0Iϗ+>@HEjY)JTg}AkPS"R T3K Zۆb4p-qn?%tw0s,|j.-=eߞ?&FEkM:hMgKÚ5/ k^Y3{@0 #X#{:~ $"+0h BRb(=A4>ٜ'$hv#- `MOb Ȟab\N/ߘn5{5MZ#,Ul0b9Bpu5lǸ=){_k(ce65""^& FEw,^bκڼƶy %$z~Vdz{$Wr+Ҭor@&PuUʪBtbη^t,RU)hC/մ$p}ܗ\R"JlCЇf UL J6Ay  +O'Iax#HBɄgJxaFAzl~X+ wp5~8\>OFE&{ui\yʮVXKc%.[|#^L#*Xa 2dGԧRp_a JaZ=\_p}wPKf-dniհ9 xhD5 x`B$ a# ñ'pR ElѲ7ldvɳ me`T`(V|=b j(,8%yQg X[KD\.=O(&5K)^Tjd kp3HлJ S-e_,Li 9S38W>, > G~aJJX{T?IDK2\*nڹv*g", f "7aXL^HQ͢BkEX -szV0 ""$^<^_Y1s|F\0EX[&h+KQʪWg%I6<Dz h ZaVYf%UxhpsQ5n9SO^M<^5C/_WeZ=p \3|0~j.8#&4sI}TXM@u&aB?VRT*M-`x8}ukcӱᦄ@S6Ax ԺM=V0J}4sxT"]J0hUN&\M zb>KBV>efcZ":ڵ!b#I"͏%B ;X)Qug-Nsp|qսqeǚ2Q]K| Q|`wpg]YI;n\t UqsJ|޴u+th"byӾZY ijJۺi:r*cKM Pb-E▢XbHP*ݵR4p0-BP |iK)Cx[NM/ƇV4ܰikEg&jdc!37Zjɞ؎sJL]~i!9GjA}lk~ڛKLWy'P >rrŁMw;34XW y,Skj7Wذ;hݹ@Sswn',68䅳O|ӝ}uՎ(C}w*RPw.gaC} YtcRcԳxQ \tuyJ=kqEҔOl)MIםܮ.Ti?z{IZ$MMc^vk5p L!ڜ"(Ч_c% Pm+B͞, :Ps.zs7::]I_ ƶ PT"9cRQd){ h.QH8Wjʤ0z\ts̗V|K\ddwo.CD轷'KP\kP]q`r}G6a2DWX1mp gQҒ߱j+lXE}I%Xn%]vs/kݶ!/ExJѽ_,i7^俹u\QDp 4ը2tRmC^8nSkj{VV\ %6Y+,0yuOiJduf3':)ƄbXnx_.Q<@DLٙKg٨vzQM{S8y}Cy $( SKʺ Wdt5MW3aۍiY׍^7\ۍ j-xo7:p}v Q ר51+*ERyr??ހ0t2A/t|H"M>аI$'0^mX..v888\aC?qj4c0&%~z^ڄ'p9vG탳h2{E%PfU'^pҟD$3h.c0>x0ȏ&Ѩ]{/fry 1ԨEp{C0f옐0> o8@{ɷclaƇٙ+ 2K'2ߞzHpҫht'oEd|}=̿kk~0[c zs@Ƒ;1coYou(-̺篿v3l btS[GdFCB{$[TހMuQ7g1ؚ9ʓG8L%/HH\}?<,0]!3>8<]F{wH;z(|j,]z}qG ,Nb;m>enRreL&6̕z}k^ԓ3ȸrnC~b_#L$K.M/״㹜n3<7s7{m9m/z5 kTI!;blYiQV,haNSu {qnDt"LV[ŐE~[ک#NX,W2MlW5 (MEU6q$?̀c2P[ <(QB= ռ.+ ;ƮBril駣󟲸R?:>:#uGItU C1uGc:A1OUA'0;`u0زD{vq_W1߸mOR(YլQFo&A6Va251ᢟ..`F΍O^8xssl5)B``{E67m|.]*o|lyi ;݌w]G#u8\n埋,\AG[ >l~U{)Rp Fv܅(Ҩ޾p!q&:S0SJ:SN0-4C+σqFK&lo^f8մf+rC?Qa̳LT NkQ=xxxyH =|BJQ)ǃkEpE#EL?L?m0 loTflzؒ^EY^'a.˿'Zbe~gxP\Z;jJXJ PG"MV3[zs7P *ZaJ1 0zyP T$bXiPi10 =jfbӪ~дQ|)` 
L"UHd{1 40 E:"OȢ~ڟڿN(pn3᪦;F/TΪr&dICגS0iISGzQpKpb֒S6Zq`s4u*pu-f\DjɩNr0 6囤"T)9vkPAI}RBɱȗ36W?PKNu*C[z:ɡM;QKN, u4$#;p3%}Tj)[rHYykf\I\KNE8疴-̱Ƹ % ;ऺ$ ;cs4׵ͩRrHHVgs̙̕d^*,drr4 A$LNh3M%j!Gݗ|d"[7{ %@l+v<EP 8^rq7m%*6r+P.W?'TcE =Ȧ`(!;w`O5vQ鍵{1jL9#C4qx(N fY W#2F罏,ăqyIK Rd:HGzi&D'ZʔQ74  bT[뚢a7jph14-ƕ;s>Qyv3"KB2TWY>^7LF5\M)P tc\JzS=jzHI8 I8A *d*|ETmp@vm#UT!&t=%*)moU%nU%|C#?ܬRbimXmkan=d||[;֭Og@FAoQqqRcPkBN8i3l3A:w~d'{{ˈuq5FMp {%պ{8A{yu=L?fk"`%BO:d >*m> ǝQCV9-4(}8;>8 3 ]fqMÑ;hrq`,ѐ qGT SŔ0^mTQB}!X9":-.H/?KwU0vգtIҴV3Bue(ju:WemHe5VMYJ(ENΰ=\&?W5nJU fU=ok4}ؚYk1,K9hw;=MdNgt SC>yM2=0 #D@8 %0"1GB&&)[<njdI3A n%f8eZ"E@HDEcFII̴1K1H(P Y8[(ba<4^|}i6>Mµ6idwX/A2ve_dEutW))3O=dYKB~uu@?k(_h/Ha>*th.,)ٯ"Y,0Ti>mX0 &lZ" z]eF z>k7=}T^BP8PO4QgZ?3:@uVJHl *r' oEE)@gbS됂߫wZDԕw*𝚰HVރ&S`NK5񝴀xn3ٿ$:|R^[kK|7Y |E9gurS[bj|<2 .@[#J>YZOjw X#3 3OX un ?|?# |Ǟ+BK5 ?+%ȎSΒi{jImH9VNOJtEᱍKuagwd̓8lQ<ـE0YZmkAl>Ⱪi*3u2e)mO.upaekֺa/K`Se= ck+[AR$/mKui6}M >ga`2O'9ȸ0"X&|)T63]ك(咻+M~Lć^;%,PW,pvc]ߞ[HjmBG&9iyeHWyFA~#6l?G7Oz]pxScy<24bh: \O }[IIo*7?e:@s7A쿨=_x1:W.2Ԇ'LZn:߳Ӂ9'o2翜}{x /8)~zgG~zŏN<|m*5^=ofl4mřG}ܺ 侽qcqa6N~av(s MOف~TTI*9wf,Ѥ}_z,*X,;կ2XĿOJ3ntOV'oq7$[?Ds7{QG̭LփNeIǚ^ぽjY/˴hvsnXm|5JNS3ҹ닋?OzıcͧbLcNh]9Ӹ6(H3yCiPoG!;(1P*՗ ]Na?>-3C[^=mʮ{M|ݼw63[uSL*7#+0)I˹^ t+t'T!zR8P6$Aeb(k$,Mim1o`B_ Varb*Ҡ{4_sX naSsEV*/-;WaTQ>'שѬAqjBn*?fk5ܫ6:k o:[e=iX7%ljĸ0hfLB&?ӦþZHxsb1i [ }15Ne#󋆾XKcٗƲ/˖] fFʅ2 (,JTD(1ƂGDq`AAho-w~2H^2W-XV` [gdFĝ6&ȶڪԨugʍ? DTh|MII!ԲXHRʴv(ԢK K E846DC8d!VQd>#e߱lW[ 5ϙ(#z˽-uf^7\ҍe`s1R-P/Kߴ_k QJ"`F1_A@oodx8P<T.u|uw緒yg"ށe.s1WuAꃮF8*pӥ>5⁖XOB\;#R!aQMb3z;[^g V- inr?]&qFRYF%,WܐFbwq7}j+ ?33#Nf^ Cr 8S4;4 'c"\IsGО&΃6*O<,f,(']g1%g}${'ݧK4!5Sh#9p'_'slxGNq|dFЯ:ֆ#@L̝voU2A=M(.0:13(uQ٧pжM* l8s&VbQE)DQkEBi fi|pPhܓyn܅BJ31wA5Nco=Wv k"a)97{hlQзݼV j%a`6v6{vnƶ2T^?\q;"mE٦iflTbm=Mk3Ѧ)UfGxƑWeڦBLH;N"n{Fb?E:XulcYnM| Jza0?l/n.p{nf0k Հzp&VagnMv/.^+)gGoXusM`TGG[tL ~2>[_M~'i}3.zFͿő h/ugӫII:YeiJK|NV3cHyȉ6FJ[ e|s2CфjA! 
FR #!C!!"#$sch X" #D.:Ѣ:^N 3bUƅ_M zYo\`O} TT_=ﰉ @df9`O}c zR,?[QU3d0[ϕZ^ev$y5*{#+GߊF_ZIYVQUp).7_vU `F"ùagW=LjxZLb!vHCl4_Gy:Wtz^=)M9 ]Zm75Rghl`x8r+cj=^udB E _kmX_E=p})u8(z`I.5RRw$-zKX `[ݝ2c,C.$jڝcWnǧй[V#E%9,S8K#""'[bO"|Ԛw| TK򐸰Z)_dSN6 ++ݚ ey&)PH;ޫB-zx&f9| 0OFG$ݿBF';w),kHQ$bLT0Ju1*%D#G60Kp2y!ZS(y6*lT-81Q5 D1!RqS1BE6V8V lL^^R}j uq_ O-%VL  Y( AdcpXÈQ`WlT &ذ \oj=y;s]F#C$TYF0FX0A(pU LFY*D("exF&uC&sR a"^✑;*>raQVh: Xkr>oW:g 3`OH-#_bo|ph^(~}蛉է팜HC|M=sXM@ %"Hf/hsq2LW=&T!QgONEvBX(Q2iOu8 f`e#J0g`mj>djEsBh- L0$xf%)w,+xB9& .5c`7Z|dɬ( ].wͩrǣ^(]+qbgtw~e|J˴.^kUpO6gٔc{K&vjnS8ݓS)٘BSQ(xTn \҃0+ A߆b ,Hୱ|Gf`J P~߷psa`uBMNOaȆ`"92}`;eo]_V(EnUL$۶0k%ƥ$:JU(go3}KqGƚ*9n{$@w`5Fi`1v˝沃nU@̄F[eVx,X!ғ!8v]}޹n? TaAj&%{BKP#¿֝VBm[YS OTNٜjȺlЫ[c*o~9 urnTS,xH]( D6\1}h2MZxxW[nm¡|#jBpN['/. s8#&.)@HU9.n5ۙI/T}GFP5 m^(Uʅz>3>DngԷf%2fk20:yO7/FݥDm֤!O\E+Ta3v_=usK`ݪGuu]d޻MV=Ӻա!O\E|#e& UwhݪGuu,B nՏ2zZ:4䉫Ny\q+ҝYȖֳd:y=Qk8}Lv -wT25jd&ON>Iv IA *Dr'h <݁V0=wh츃W9;Cj'{N+v~Y G)o/>4e߁S6YO*Mw컃pz^w?j?n^JwRE}(D@!"\}2u%v|/Q0 [jwxݹ6p;wҶ|GA` SQ b"77QWRZM"b+e01>6\@}ݍj2j+%QBPT.F#;f/a6[A1,%?Jx:zu,[Sh*(7\O:md}õ’ |wXw^OI2<pT1 xfTWr@C2jbk+O,nU+7ZqNޏ%~6UjDw6١=jIؤ y*ZE4f{ j;nUy:UQƺ0`zZ:4䉫h%b|#e& wi*Ku*u,B.nZ4䉫NaO,FnP:{AhUn`*8:ӳ^Y_ B/پe}CޟڬOr2ad-C/~Ip]A`JH|rxrLP&PϽyzcJܚd~&0=ӟ=LCWy :Ο̳ddYپgNǻ;=ד?JU*Vv{rn++rٺ2^ ZwoZ5m]ŷ|1I+WA{vZtOHl[{&Bʱ2ne/,>қfنf֙\aMG|t88RRu|7۴Iqjs7y?N$0ƗIv:&04O~d26zw_x'g ?m7M?;^$2Gn%woӡrg8?~ctb~1|$[PBWoc'a1=LF$m1#~b>ه;?x{oޝ=__ُ??g~{ۥ6xtozcsAͫUo2U/qs~,]7+',MC[wCM7I8& Duw2w;Kjg\Av&۳Go`?N^c0՚vs2d힢tH|$Οpα;6%Mw)U'v\N^;{j;̼v0ufx=qv>NV*{0٫I?ҁ+3x0L%'wy( M'WpK 9/7 :g.7H 2R#d2 ,A ?ſTXGUon=R_p܌L/3t+~y{z㗟=zz ⧔gzߙQOMϒw0$ oIT Lr^Qy 6[5sM8NGG_&[o $EZ#ύj8.^zvƦsI}G_Jb8\r6̍ƽ<,YTm x=Biߗv$oS5|ff}monF-D7{+DX6vzW1sm%/YqTY]8 b̅5d,Ed Sf)&h9)C+Ĕc\W'ˋwKx' 5y,\到f(Vq ), 58UHqhB`DPkLt,qѳ1࢈-"EtI G\ %-CAD##(B V[ %jmJ\Z,Z/"a#jkc+,R؄  *h,bULґ ` iBA" ?)ABcl*,kQIA=DmFvdG[CiiModڣ \^|UffquNw++ 8WmI;Khvۍ;G6nt|l׫((+ [:A{_sR!Ix/NNc2Axcu9.^l^e9ӛ3ѻtvw^/̋z[w:#e$*U`$Y+mDu^9iD'9Jg}4,]L^QUXvIpV9nKWɭq@%6WW}@԰)J䦻Hj&tL%4H(ŬVːGG0/\GeQ5eJ&JU7:Ίc^DCx+!LCŅF1M41a 
4Ղ<2GsM%k{_Wr䀦5caYJ>WBiXn,wđp5c9`_Xn`%C@ |  (LT\UMcyEMs(beD=r9D9JG2S{ˆxj l)F>EԺ~,;7\A)n E"C uTt~G֑#c$:܇p!Jo~W1d kak #z[ *auL*so՚Q;{qo_5&ݤ3qwLr"53^^H S$Wߟ}}D'Qϻ0M:0'-~*__L-/;m 9Q4Epgta(Gx:2?2.lj؟h D>,~U(6t.*[z;渳ΒR ߕ-|:lY #‘w39nqMH֮*MWV]2]!9L ss6n nէV>CN_h&5ӧvC8^4`Z2N^ H3 b"[JjuLL*@1 6G2}\Q<e}B4Ax5VYlWl+~kr?^x`2kY-*EjD0(a +k>9,$ UfI;ئdSUT*c N_BuUKG@,09Սi] Q-‰o۴P~^>p\l` -^mYPj"E2w%t )S7Ph&P(EXKPPDZB)HFwYPHe.Kw4ͤ&xǶ#!J j;?KD>Rެc'G,fPYLBgk/=̓2T Ǧ+c+DCßoqخ?I67}sN͋E@IM{fԿ}QM{a I~ddz*s~ӊdO+`\3^{ 0mwL*Z-G46a4O&IbDbŁ &%$l2?sXSO!pVnZLP⡻ lf%es/$G X YD9It%EPvilZ+wG譭bBoI^ mkvu+"H'z [^/#"' @tY,U,Ex=I-9ClV'1r##(No  'c,@AF{:<D1LrQO<!CKcؑŅ~ w9p6 W*GbO2|~Hҡ&$*8Db/2K"FaEg5K9zU)sbX6,!'%QFi)| VfXhN"VUadaJAGSNl8NϛuJ~)cGcDB֮BWAt-]Ŷ]۹G{ #‘ètN99'G=ndc2LL kkAV.YPgdaSJ4RM|1'Tܕ""$iF,IjX9>uE7UN<{DT"NFbu&1.0ADB&fqqS şu7|鲱L`zfaEdke6nl ~&i׸6UC[FjF#!>)J $ ( BXH&K45 a=i^(^*ɧ`-͠Yp(|ABO R*JQ%3FZT"!b{La?B]6K:P+u=p"řNJ0JWv eG*RTh凒y@s#أ~yX"$" TL⨞V4;%m$ʆ:sD a[+MuFVnʖu%dClY֘Օ઴jY] )%dVMߝqSc9 KA4h ݝp _93jr&#75IicϏYwL {Sx+΀km]\z*O * jADlVE4Rp2}bњRR+q.Qs*|E=7BhKkyl"ؖQRG0BʱM5]yxUG Zj$y([TzX"\H8.ʨg`)h~Avaa5R!<]Wue0?֚pF]CMxPŚ|40J6a>> %+Et*jonU@s|$K3 NNߞni: )jIgp5j=SڅSd뎳-*Zaʺ6SɺnDCĎ.zևw7+#[cy[{Z=mQޱ 2xlBDeN[KO(H3)g=UW?owg_j~r_4'x),"-޶0ڏJn}ʥߒ+JT{V6+5 $u)2UItۅ̃pΎ 1)J֘^rpvTo(SӬvc vފCΞޣL^arEE_ׂV(8ݶ9ћ)d{M | 9˰!rĬf?\A?}w,5zJvalp/KphY0+xhw3G 5Q؋W_ K6:L{!4t0L¥̀7'7Fw$=7A4 FOaLRYϮB39 y-E,._G(M4WB:{eUx[QNwn;bWf݊'q:Z24WB:%?B^n*CV.Syg= i+n8Z24W%rbZI_%II0t܋<=ac0vg6lQCoyajv:vZ:L߽Dx sDd*Re.spLCSo;ŝ;VlEGΖGm#9JzT4rQ6num*Gv+U]‘VU&Uu%&ȩrP&]UrGc<`$*yN#mJ,zT6rc<`砊cUFk#ݫ9s(l zʑt9ZSYF )S!Nxys2L\*2fɵ9""<+}LՇ, 4LJ~F-Nr,3so$ t9l}w7|<8]7t<: g4vǃ?rtGh8ܸzpڿ>ci<6}DJ;  r/ݨiX݅wx ]DS&鍗 P3߰Z@Iaslr tMR۷GWwvvyq<% OWv|/o^__^+_WGo-߿t'vFp ׽1~ooܛ$,i~ԌQw8k߻kOF60ڳTy<wp1i`]*FoO A#Gc0)W$X, xyl=C2U g^[)vmUfQ>Wa hkSc\4 O/6F7xܙux)X.C_tb4G_\; NsZy~- ]K.Moy|N5!4?Mnp~>F`z՝Xf5k٫ǃ~?;Ԝ0!~<8}O$F}P7?'?3Hn> &ퟭ@dnbF8&zϳYnz4%/Wv8 4|O?n콯H;z(|f<]r}yW <gONb8 u&m?7/ȳ 2 !{z!k,!it-S"*Onoz Y"W7{ DZ'e.*&Q9o#Y] s]s6W4pw+eC&񵝻^3Mr_v2|ʒ*ɹ@R%IHrImKŮĻ fϖ+AΓ'7wc F]T+C֢ݮ`]%cW] 
B1uFhԪP&)k0WL&Yx=l'"Rd82SB&AÔ*IXHQ< !J x?ZmJNST3Z#2,tY$fv X|LXbkAƝc GhD$'[4N^0O8 Ce3{=p*(&!edE4MT"PS, &"x)J %qJN@ h%Mc22MLM MwANGs4X4 e#:%i?6 vo98@4'j~ ,d$?Л$o{q86i8g8қSFmGq3s1h1/Lp26'@|03G3r2__[?]ax}ox5`mv1_dP߽ya}WF'RvpdЦ+7=geu\f/C:.y2MԼgr@ ׏d"Ӧ47&ܜNuewDT<4 Zj`+mwWwގo5[ru7$ѻjTtQԳa[=zTEsʼn"?L;X5J(*g$],0$ːYwiL^n7?VKLj8iәrpQ xV^ }i!1SF1,Ti$!*(}*",{T#l>%S%PTjCޔ;80q?kr7Zn:X&j˔K-0x~l~gѐ̜fR!\@TS~g`3Y;@ȥMܑ]Ze. 4+~:Kl ?)wg.fjq*΁ߥe~Rjw8bY+BYbILMf_\0ze2R }.wYHɽAFjlܻ/OS#3yf`;4-dPE݂ g4Nj5@P͖<`8`hE 6H;8Y>}sCnC^mtBSbm=t6BԳHբ8@|:];wSjήEJ0 `NAeƀS%xryB~$qQ z$y0wl-bVxsKH!r+\ᱩMl?G5dx+;xp0Kzma= GP9~ X^"%iY~nF:E: ( CmFIV: P\‰[Fɋ*G$h!(<& "J02"TiH)dVfyV)Zrc[/;Y:X4l(Yj":J҂/JrʃY/ǃ4-81m0A5΅gJorS8W-0@=fsnɎx~lt;Q%6u}ƈݏz[%-Ͽ[l)6j' ,Us= o}g`Y4mzZjo5Cux(]s-l_TD U !k Uw%>6A-mDGrSJA` 667Q3;9G* DQgf>xԱsgem(;㷬 euPGCŠj%E~Jb>@3uj]yT(5Uٙr:(n (V]sjavFWF97Vs30ɹDnhM] ylʦn&RK.KUGq R3ؑǍHIe}r).y2xH^f)/31My?3O㆖_pG7MfUQsfgWןh]6ռu]>{sa]/l1Y9srRlJP,Ӆ"]crvKJD.7dL̆ǘm\jy~h~'B Ew` bLRRB% 1f΂'R|1wj9ɝbLcé/OuSK{.Zb7Y "R 05&fib#̦r*֫iM\+7ӢH:_O +Pw< 06Gǻx1 dJvk4 r9{d2ʠm7tp*ZO@e}ANn?-vbXx#MA10(QI&0b.XEn2bm&DBeb4r;F[=  f"ަH20tOE8 e!b*&( Ic6-1e&Vo{!uܪrSP!7) Zwd{DK!}AMB̶ [ jzS gFR)mr,Kwd9JϽl`eW][HmZ:{4mY1'M &+ 7k:'V:8Jd̓$HAA☣Xc (&RI8BBcOȿf53'mbXx;5lT~q÷լ**0I&S%8(ͦz@OKpi R%Q-H0L݇h.1\%TO?!0qA Yš* u}Y9(6궮'H$XNZu+ vzR"Yq֧'m%9 I+RO"zB 6;^3-Szx{b•.^H'ۋPo$kA7|(DńvWu9:,w3T9_RUBkW-+ Yl.YQh =~o=)@aujHnyڈ)5GO;a\9|[5&f0 y*֕uѪcq02AjB֋9Xj*Z*PZ [j(\]oTR,/ϏQr4ɢ6a|(oB{̂TOj4|獫zRc͡lպd4kofnz)06l*mq@UԏLx6gٟFW\!uwxWsVv.Mm0(3 %`o4H~q/pQZ t'xuUYp .)Lj  Y85~]F( D}I㍱eXmfa" o1lt.' dDYt!,(< %8)ܞAR|^^UQ% TPUkpp^'EZ GJDm:MXf`BE_ K^b?݈C|K|f7Bc : !ЕkJJE_)9s۷9xucϬCpwb mʚtMGvp{\PMt̓+zlOG/B6U @3b`_(Be/6qqZvf$o&c=?AprH-*l,2vǼ썜0_8DRs8ê k~ǝCKZkG{=88&v~^ :7k<mN:ߠ돲7c ; ӹ$߻>;ѧ˹wMp4._C~)7`iX[?ڏ^|c)=nw #Oj%;ސ忻$A?  
:?45wuhNid_r軟;ݞwL'?z<2"(@WCdw.]4LSݓ)g*7rtmw 8؀?bx9pfƇp+ *x>뙛A;-pctQF:9S^ewShdo2l{Mcwue7qfˣ:Cp0ymR=ێ.jc`pD|atu8/\R+ A1c߽ȝee"3q^gLu>|!8s (E\0C+=6S.3P 0~z]0 DRG g{֋/ izQEa9 D;--`d 7б>QQH/ *St 1_|bX}HXs "5栭1 r 9.^V"" wZ: He*&@D36@ N9SEՠ;F|i$07iU"6 S ާa.j#—} _c*F Hl[?tYѷUr ::b{ f.=B}g]GL37 >v]. G-t9n╷-鏚 8oNN׻õ7qa"3H/xUL};hd*UD*(-T3 jeA;[+x(lg>&^JJdo0!KuT>3(5ʷ V.6W)n7퉠t9Usi5Y%bfDVɧ*L~dď)A-IӠ //6s)7 1|{VWˡwoZ*sIW.muTnR9*/*Uksarml.])l }U,C!xcK0 Ujzmʯ59DXauWZ ,/֎eJ@bo#l5X;WPsA~dj-”y_0IGg%ESG޸Y;liǸ;kj]9{FɁkXiʦ}y5kk(<D 7oϛ4o~LwK3{"rm9RqO΢kTnY&)HuqPut۝4Ud{mTE[Z<%哷8]JuqPut*h۴t[?dAV,2O CCN0oAc_u: Z2( V1CbWDG?<_n-Th~{kͲeŎ)*[V Kʖ5fLCIӢ.Du_`}ru&|X!Lm$ þyqz+hS) tL0/!Sĸs̽Dޔ7I jbdyY2Ԝ!T'j+;`+ _( ">3ڃHH+D|s&3/ld.*/R.w&=P AvJDŽn;ƫ2aB緳6qHZ60)9a{iq̺ЂI>(np,XX(TL<$U`@FƮEe)XYAx,waS;HMQ.~z=e}h,xH9P<΢ ݼJ R4&<獹ؐ%KؐSq1ņVLC!AEDN3[\6LI577W~x$Z`CsM$>i_ʼnM!q^f5gOt]|FW-vաb j5yu%rVEiA`iVJ]i\a -Ќ0B@{RCzj#;h7XLޡ<JCs?LcPe̳獿-oN=@|< -5T/':nle>65'F)hC?^\;_'09 kkn~vϓ1ن] Z(p=>{׷_~9`N|RHmo ZRdDZ\)k<mLxçn| Y+Z*hw#t|WNܮ9'@;;==zyuqtuS/G>]νvai4._C7dtăoqbeo̵bwFtMdrw7$ǽAxY}n˾DgO3ˡ~w{޵ۡp~=.0[ LI)>@ 2ए"7 2hDgҝblV)7@*WV,! ?%C柶=ss8h'ӷ4e2_*cv,+ 9[!Ԡ30@-2G9/"~lZlXjn=)6ojah3\ޡg}6c[Q6X1bM*+7 }`!PBE#܀ng7/4 yFuf㫼DuN<13k7zfmELgfCJbm E4f{=EDEt%g7ܸ|5`wmR(DL#d0'0ڣ AoSdMƀ1XZEYV# 譁uAT.%ˍיq| $;7dg' O,c?u Ώ+ J4DK-aIx2$Fx1wIj)!gk(aIع>$Q'UABn6OLF\{4 a9UHQU?+&*pAJ3`Rx X18r-n9FfMZsrNCh$%k:`fAՅLBb{c=\V\v(™E"LgO(2{>ȳgO(*f.ioajј=1#\51jqs!uzs7_*vhXbrTh\Hؽ,*vrU}+$^N OvfVh&fUepbFH(*#E"y01 eC!) H/S2 ȧ:K@E'W* v!C@{h]ąd]x T koS =%ślH>>%/?OӂOƑgތ@aMDyg4~<K' 9Fi5ּg]3(v0QFqsoo1t>s锊+ɰ.[-(9YF_'=Z )mpր%L"֭6w$oQ.un}GB`Q*C} SI ee+ (#m5m X!m1flr\=9Jjws3 8 ө+nڍ@& 횣ڜL̷u?ưtV"w8h|}شrޭC,XK-3<[o F EuLnIBT'7+e:KV9:nZeVsW9X:Ƭ$?S]];ѣD8s4kͩPsI /=4;1Z\c>828io=~g|igYp|>mխ&JFi0BiQNc "[(C޿&d2ۘV_FI0j,g?ًrŰބŧ#h)[0c\*_Ds6}Q`q坟EC,#n6*G \T#wL8\,peS h֜4qH,_rApv;“O_($J i^f:-!FvAR?h#4;u<6``ݨr9T e$snF*a,e >kzM8}>=Std.OXF7D8TWǢY1-3'w߯-U/t$)X_4g2<`TZGQ<}gcz/vmjSmց|PSczZcGZE:vaaUp]P5q/" ܙ2u.*t3R1ƌ'(y F: yJqd< izG늒@窰9[eGh\ 3E(׌I(*t쇞KiIJCsBA8HbPJ< #s$#k!iNh2Xr)p"?<B -DB_ SG^$FƌH/&H? 
#w$#o!L*6jި"UrH8PrJiQ!I,CSAqX*/X)_#%$֚A22D4r®U-aH ^9J0W|eV| _w`2h)](ő`.5[R%%4%`a7\攡T1V i.LJ,yWT Ӳ\ˉ㺏uSa,,>0;:Ȥ`xp;H3+4Ad MGQy5%mYpMujфբDVL݄U㢎 I*no,@vTEʈ<U)+"]!N ϤEʈܠ܏oph(g) KYTk\51WS\i]i5C0A @8;O΄´N0NH&`|prQYgصsۑ+WLd}$ȕzqC2|o*kq-2Tp7?"Cwj|0g#~  ʝ5y͍su\\ݍ(Q9&b@NE ĩd-Hm0 ίϵX~n' (_Nභ 0)wb#+mgi!E'5 q+fWץ77PDlHPZ2;) uAK*w/;;X^ZGm\F<4uݚmوxfs10Q-FIm?Lwu4 ^RibBgtaKr%چR=]Sayeq4U SF.~:ڈ?̫vj.c-ۯIn$22Es}Z7+֭, \D3Xʬ[\RG EL u4q֭, \D3X[jTu+ѺmCBq\|Ϊwpwg\Ğ)@%kޞrW:W@?8vJR%#ϘQc8uk76L`==R%NrΞ$\Ve亮R][m9jk %I]@昷iG֜4;G0txBYP3r=fTZΎEor̨? *3$Eʳ"9lc~;gL|?| 711 @z^=x2Êb'%0>g"8J|VlG Wc&:7I6BK, ={b%V)- g~ZYg8i1I{,Wj+.m 4fDɧҶ2\}0u$XiGA(E,Ҝ"PD3jh&yɖ#/G*q T %"A=Bh b %X"$"/z7BrnW ƴ3SzuZ9OaR<՚Sei]?PYk5mך ZkNeC&T9T8ruxi#naaG1]%T9Ѯqk5(sZs*gPt9iwTFH~C/mkqtmx侥UC0BXϾFw۟;>sb7Cd^N47c*osB0K zf$l5L*0:.H7ޠõbRc;nܦc4.|FU7vmi޶CQCkt,6{h'VnAdaS̬aJs[Wm#i~tAh40{1zmК.hpx2.?~ox|tlm2n׷^wuA^Ɲxx7ptzϦO^ڳ+0.ܿE&Π?| p1~F?Gwk)‹ٻ `2߿1c; 9E Ϊiɨ؛~>d_4m͏7osvvyq<NsCOk'g?/qu?.VIgm[P?Sl:ӻM柽%q ǃ'}wc8=n/ܛi\i{{mGhclпLƷѡm`2;?wxqۏȻS^~D@0)GxЕf+ҝk#Yob|Md8؂MN{6;?|k 1o@r>^ӿ5fx}Iﵡ)|L'S@I҂+?d#X!8a.jf*i/\Y6(pzgs2L/$ h|'eFiC{2 lu>OCjDfW-y ܲ&>y~hyw¬%/?z|jt Sjԓtt7dF_eȔH+w0٦6~=4fNr{Ⱦ5A8 A;Gpb^?gfK_^v6*0˄6=:47{p[( / IrnC`;8iMLjffc_+P lWFj[ Ï__̏;7Pylͬ1Ƙ&6V];\wktg?^^p)wߞd})>^F}d ,(ѡA yڑ¹t;WN :eF(L*aRQM}Xi(43Di}X*X YBfH+aCgKIAʎתxg{#SiM%]+i|yQ2wFfZv"g5ɵɓ|ݭRm ўS sMuVGΊv:5v vc+5q 9_y6cJ]>q*('>=brT:S5w9M=PdQQ+6%~L;R.}uS*ޓ/H$iF(sQfӖZu:(Jd:ۭ!Iίm]oP-/[NSSHLCshղBuvcOJZjNsov;LQBovCs M)Zbh č78E'֜ 5wkT9n0ClNUȄѺ𮜬¼{_ (t%U.3MW.-U`*9 >Sd%]cyrΉ?uN Jrg:=lF٣Ղl$v_=),roQHssAW./?&_]*?{۶_=ǒSMk#vZ=A@.oeIG%DߟrjnwGo{:@!VmМz򆺽hfԩߡ'w uʩm@g{lR,A^<%A-)ilQ^(=`iNժߑjtQd>~G6 E9ߑ(;ɍ~O)Fר߉'wNsNNt>op#k)"jd\W̜sٲ<|6[`Xˮ_mxS7B7نR1/<ʰkjfo6ka;>0 +Ch UӾեx")=M:āv37 2]~}j)A-)f bHDZQIsR TD`^٦.EIs#d+òž^{oij~-CCLQ&kR?'R!:)"Q0GlyVBaV,1i@yuD~#d$82dR! 
4t6rc@b9D4H 'yGؙ|oJ,⛕Z0 xRZ89zIxږgٻ'Uew ClSOA} 5&7G]N^\@H@ O9R6uCЃ;>{fMM+g"@}5'̈́#E9j&ש!!87?b4D]덎:ز*  O#b`Otw}"v'3luM0)=c0om [M$uZvIZKhtv!Vzu\k-`@jU4ŵۺ )0ϬQ&Sn}Lse$%R\hv<뫹bhd7%yyȵ_A?bq8<Ƨ~WOJ/YBǼ;=ݍUl:*ozAHFhb~n\5==d8_Z~0]1ѩ+^2ʷCT^%'7:]@ Sk0E]ƦL ??qTƥyxmq -9F)؃=v9zjx6}W__^>WW7oί3o߼̡tg/~{O/o/3˫?ߟϛg|jܝFwj?~x'^'c8{/{Aro9=.<ˢ!!fFp26 ~Kw2Sw-΃ӛi /Oy~{yh$6r>6*e+̝IPg- 1>CƃG-x=*欄j7ļ7xv8jx3pzΆ'gVxrMPߟMzǧt̕9TQOSآTMK3W< =}m.#H}S^D}Qӭ'}_mv?倏dzwm㚟η[׃^tPX[,Tu7r*>e0 KXw_;(hѝ ._'TS[C"~ CfD^UT'zO\y${Ot>)鏤 >t}vGQӛH7zc}e-]/cqp6߯8b mzQ]fOȹ 9_#!!6I?=ma~fgczhsy6[\vjM6b5 Y4mD-bȢ\cJRp"5b:֊SOA} 56)NI<9odHf}o( 6EHw$~0RHzʔ!٤[cO/6dSe,\jpX/`@JWUo{|-9tfVY"sŒ'(flL)aˬK)н2 M/=zߪ C@kќlXhTc>ۼhv1PM`ۡⶀJ8F:hoxǬE&*21qUY(whgV{1hṂf ΍S؁ , xvuI7dl h*k-$}|Hyr~@H9`rlp# N>#IaWCxes"P{-dߋ9q܌Ck7\ Ъ$Gb8a8ˌ\߰=O7P/g\\Fc75$6!pNrnSY.fa|d' #%ZL@K9"P<0"B*QP$ۙA1)ǁZxڷ\`/Y['$iliTVк`Z5u!dWpͳ=;+]%y|N=KF(0{+*MMJ!-ߪ )|_.oiQQɱU[Y?DaAr_f}"ҍ3;PAamR6j'?SSfUƣ{HZHyHf'tXvnu0{+OCDw J22z:)]n_~w՝uoz0>9mO)ܜIn vέ*Fέv.)ΩR-G)yy-\v..P$s`Vc2k鴰iho6-N؁-W[ >g=Wos4qj wFFXL$a@r{-$B\QG5 bRLش2N̮cGU-FBm_6U jBh V $%('U^`Aі@d9ف#b4d7wF_!+C80e`HYYRwqlYj`[Ycf~bX!lm0sɰԈt"ϿzXtGHl8xyP.z!F+lr %"䲷jE؞<R?rqrm}Lڳ7K"2+5qMon3c*%= [hlLP͚T)ٲ7.ն9?O߯zhr@8g"1_9VDW T2i J2Df}Vgk|EWM<ׄ47?;5?g^dPlԬUVNɘvvȤ½m)?|?~dm?}]!>(xdO-'G]LQY#g9ӛhM[Rڶq~럞5?un{5֫9S`/ӟ٧^=ߎէ0mDZ5A9gLb[N)KzJE#N{xVvyn6d:iuy+$g5f,F9P֑ xsN29,}S)mQX*ʘuKhi̹f\}zZd8~>!`O#X";s꘣todJ#_ZN|nL zm !L}j8o-l)R k;<@_W"yb[~*r +WׯlQk kYS1?EϘZey=\SdA5HS$ӧkGq7HB'UQV\$(s(+IiI$OT@.D{-x6XRkT5QQd/Q)4䋯oUmY tUDhbbS1 5CBb--{d=哩VD2%T+άl1r| >Y/Od9!K?o}%y~v]g}#>(;FJP߾<@Ec4Sg^ج)CR\kc+,3xŚܦY_}Ջ }nEELq=vcD5A偏ve1Pv+_SU!!o\D`xv[YN1hK"Tw9[3ڭ y"))SAfVk>󆝉AfRSIfFG6Xr,DK)lDD!TsȔl#"_bDd!QmCm3<>s`Vר)C"%3v{PRj^%U#CTA轗z^!JsJn.t,6M[P*讎XOGG[Y7iFC7ٜ?2[Ta^kF GB擭|RjXXlW{_&nUHOz $v/K>(A9} ; ɬ 9f+EI<(2p|QU| ; +Jũ:Rye#0/*Y%NP]ULAet1  (n Z#ٸҳRg˥CYO;{V7vvn-%sCti;I {\vZCw3]Ħk4+EDdQ06wbw2 2= }?SZIhR|Y a8|НΆkjbJT &2{pKűU\_bdzjEφ<̟[9cE 8~sDe(@v]J%®0NgS@&%އ  Gs{+j@vL̩0oH6UyD(AN(Z>վGվ٪ ʎ`>V2j]9X rH33twsA(SZey18maiY1%S7xrkK :W20"hĒ3 \' J :(&(&2R$&A$ ^gl)Qi0wJF% 
@vڕS"`X+C=4u "[VtoUu5BjE\9XxckP Z5!Hr\9C >ʩmPn JrY8峞z6yo9.3m~$bJ3 MsPDjs Jv|ƢfoHQØ NLsUR@!=c<%#ec&g|QeR5.8D[_,hrrsq/pv4)NcgCo8JxGņ~&l-@TVOܱVÛ^“"I7„"A4?!9҇tIQd&I0rl6}:tf2Ye5;Өb ڦ(21 H$I#9hP7@E M B"ʶ\Nb}ʰ@?Y>bȃw#p<yI<|u D, u @QKC$DR(\ Pfa2%ya)H JvuF0'*P6zf]I 0\҄!#QL9a C ]YE=y{Tb&Е 5O 6O0x"!8$(DX**T"=qY qE噊aJ5 wG?ow,Xf9b(_YFHܓrB0~j\bճ*դA;E;ާm.Pf-^J(1?,[g0CHd{1]4<1wTdw0`dK.φ0J#\~1r_=30%-͓%FѪB"QM!D\QDc ʺV-i6ɸc8Z4;mh,[ <.&1.r/osdT^O/cg+oY PX椴N /Duۮ= Sm̒MoS%>O5C6J-ֱ*]&f~3Ml7O M fj-,q>ueH)ۗ=w-{h̪7b]eMٗty#f_[,@n'|F/;^~_l}Ջ O" y"ZJ2MIk7*x[YN1h Z1jn~<[ELe܄M8kl j<cnLw&tzb1__c"zR(yfZww< FE4l?cr&9n8o0Yf+kbd=|no.}0׷+bh.PUPu7Wm #5%?-h1ií:Eװ41@λaY7}~8{AM_Vk֗FlFBXO2zKMO D0ϔŶJXyƝۏ?ïwv)欒MZ_)PEI% 8S А?=fv M9f*SO&SbbLm &v4rc+sc7O1s[ :N<'6?@(YqGC"UpwYFaƊ`''+agc.ב*ܦ՘^W@aԺks/c6!YLP@FybB OI J%"BىrTϋFR;U?[< )pkg͟}p}! '4?;?eiWj'wbI?: \>Iq;2$Rlg=auMFR%YGױu#jC;.{޶mym)Ͼ?I;-6ZYRE9k3$%Q(v%Μ}l)baЫV`vtɤxѷuNܿ\~g< ۅQpnSS [i2܉WbVo:ߴR=vbriᛢV.C!!dy0Kd8+WVޕzz־';YMO jpMx 7:Oog.}?ҟ;Ñx2\gE)WdXe=3JY2EzXi [)޺x6'?eOW@9 C {r$<;[drg(:(m2/[\k "~."5ea(6 uJZxv¥wCy< 7 tQ`2|vr:›O!L5Nb;$rl"xA^0qo9yVs:}6)K&ԓlu׃@#`d~)B@^[e~ytPoQIDIq.\ivP?ⵯgH$>XM}??='ζru2tD9?~k_H Ix=Bߗ.$wlӡuL|rv}Zh3z4[LvZ_ `ms%㞛/&Qש?Q#+ٯԟ 7Pamj/c ;di9qw,ƑpN]VYWH,,dY, 1,%>_˽ 9%]UlADRuDҔlǜydnX;ݥ\a$1#%XEl"80%bHP1AQ&Cu+?nb*W6߷18O23Zͦ8ڠηLE-5[c-xMoM쓑10B@T&l_ 6ƅͨb+%fˁ@!7 ވb PvuT o&BaRLa$15!fQpF}8 8oV\K8iKoV QHBJ`o+hx$X"Q81x99gb|g$pO7!@U!x> m<PK) [㨫bCx(,óP  .7AW֕+ |]_wAvR~Y\c6]G)짔zZ-;7}N]"Dc}.rvĎNk}ʦg]%nmEOHPVq YԬ87P28V 9QT0Ѵsu Kk3wn*pSEiD_pGkgZ95r;2-ǑMѩ8G:AeU sr[`pNmCnHϡH7S#PG줻PͰh86qKyYʣ 7 dPgK50Ωs8v u59-2_NdS tcI ST򸣔RЄB*M*<&@CJyJK>Ƒ//8[ogZ( 2$ CG G, Ƙ JƨB(^o"@ g5a?m|c؍8]yQ<ol<7/ K) &9U%RG65ǩN nIXJoЊWtڊB+:ۗZѕ+a$z7(b5Oښ~W#VNO䰧( y/́ .;fnvMc+aT:2VFwgQ{dn,]I O 3~'.wԎkƓ`SYͰ2JC@S(m9壚6;f qB{a?(`|aA}P a$ d~8ZgB¾iʫ8Q>M+g&6 '/ qVQT74,Vk}#nk6PCC$(F4ɭ:$c^{3D}է*#n(kkʙlTI1A^Ihl=3JOR#R8 cVQ."` EؒXfESZl'q+~wyz[d2#/6B1ȩs4i 7qu)HF@vHZpGK l7o7K4ۍ-o7JB( |^f9|(baˇq_ˇR$LZ8V?[Z$N1=CO30Z|,^Z ~̠W/~|||ೡoU^k+~[Voze[gt|JݴDE$ḯţub.wb) 7׉e7BY' d*̚S6ue`xeF>c; 4G4iǺ:++`S8t6 O?YkD;nYȝl ؑmc0jjש{ 
+wD2$\zWIz %u2.IIA W>Oxi B&WN(砅 &U1db/8B)&A$VNaD፲!I Y 'WyÀg'Ir`co k"6RvCЦᗝqK9[9jS<+/х WPTN|qV8cAe$Q>,Q$0'4O؞z%?Žwe΍ [;O*M\Xgo[\Dז_5LKlliy|8P\U(c1Y7>b?6#yL(i! G55[i%-]p<-P+czD2ѳg_F%rJ0t;1"RN6vinX8}ux~\m4[Vz=Ia~~J%q;E˛aD+F1JmaFeq #b8haLaOngA J0{N ܎Z9D4Ng(APZ:aA7ߖf )$!{[gd gџ?66Zؽ; ݿ؀ +OHJ<dz-(CKZ.lxH0,JVƣm^fH8:AA3,A)jk (AMF}ưHHO<6aclz+봈נ$#mpmF0ǎTG^1cC Z>`gLl`OH ƅ߳g*Bpey7IP60%?z'ɖ2D875>` B&/[s/l7qŁBbG8hGlxn@y<q @DPO;7Ecp7_/0:|N?yaI’%͋"i.xZ}uCB}„<(ǔ40 cR(?"T1D{Z+7f:QaiEe/3mAAB1> zAB%T"E 2"rE: 7"&0*T`)em-MX$x \+FŌ!*ETNnPehn*Xk9Y0LËһJmʘ>+ڃOPaZmv6+]'^4\zF%|kvߚ  ^D \#߻" 2F]A6" {7#y? ̴ *4YP26`[_g`K7 gf$ +.\pr2$`$ aߥBHI% Y`GbK2aߎMxP8`e|cƘH#E1A$dč9&#q`:+ 1p!]D! 棋 $"1̊a[Uk}919QTJL IGL4"z@!q H Р{P DqhTė(4Y(7K`m*fOƫ)`Kt`}Ⱥtn:^1X.\S:,*/+q&\J+wݠs-\X=RʔpP;H'᧼o+VhOYJA໲ӶӄfQc?_XuFD h")R3b!{O۶_”f_%nZ4iQYR$9k俿3$-Q5)JNԠHq9ۜ9s,R(UoZQWYI5^cxZob;*`4(lҤp%XJ}D0L(ԙSQ^<1XTbwCAʝ=h$OKMkL:+ekR7u+V[Z +|RJH1s}`֭\[>C,P;RݮB{d,OIWXphbc^iBMa*<_p:X#KAS$ gcl}brDN5%-A2%vܾ!am~-`Ǖ^+Ba}$HPjaW>8nųː\ݑ|O&zSݴdVr6ŧ1GG/c+"Phˡ SXQq7οٸt5wH¶ڮU^˹W`<)U\ 3Z^.^W;DRl+m 4CsSРȤ;n^`M'Fl"HKaN#v[pgHRFSU^Z6Wfo^nb.(YON%)] +n/zƷhI[]ذஃͰGuVj;a+{pas.T6)RW1/0#RoW8b +Yώn\k ZP IpCX$&zGd%܊)S6OP0IwBL3'ۼ9.^Ng{fv!hiUI1e!Ե:m퇷6[zu pT2p;0 -p[MƞrG征JqvݡEn\ DZ~85 MApKP?Zkׂ* 9%R2G+NC`1 C|1:pɰČW6DyoeId(S̗0!/qhT J(5*y$Ar}P1&͇ "muQOJzXpVre49U*Wb_.+u;0\c|!PtSw"XX"A5R 9)lm^B/.#l{(ź氡Br],\c m/ݳk`T6TOls^,(1PNcs#&IDI͢vpޟtlce"߸~L.r;MnC&'n*~BVWT8ar^t`l?.$tb nmIr$[i:cǶIh/uT#8nkpyhVk٥FZ7RՍ\c[77R\ۍ+f-|PX&pǻ)&×G8 GG ӯ ? FIh?mp w{'oL:NoWsx.N|?=4i kyW LhګN`Co< {$z_]^9оz,zzm/WΫWe78].=˔V7 ?k^wooc8{ ={k9x>9`ߓ֒L{A!=f^G}{23'v՞" 'wAsvh0oo@׳C7| *eKL3'~pQ2$3hc|A?bs:%0Jͨ_{]/k7ļ^pgz$l w[p)4a5gOi™)˿TOSVƭyHf£UPk8i$2z Hxң+37, ؚ~4?m}j"`NH8V'^C?ȂN#ޝ/6GF7`?%48{}N8:~ lC"=f0HH${';*aMui74}ɓG !/LXGO<,.Hc?}px:eܗ pIۙ.=+3N2w$sM_Vq,e. 
6S}yґ*/ ?Oș ,u "ٺ1+6Bh+s=U3RAmkM}\mgè Ǚڽ%moyjnѨU_ S(P}"68VaKJ2HKyr_Gm4A>&PKEQ?kK3smrC2o.i7kJI\h|9~G8ǩ8o9R2o5|zs֏N:B`#Hݑ1|p;:SiMpwq>Uop̶;f,nyD*Ko&&}9QvVOC**3qDcޤ۬.^4 ť)ңWo.^dZ/MϿhܦ\<5<'fD˻W[[hЋ9w#3;E)`@nSRvwߏ7=zL{Lx<ՖR\\ };S_!C[b XK߷"(sc6rrlu=+~P)0^8sps2? ԚɌI)FM^(V2n")c|q>xiH!*?Ps )ܮ~ᇹ{Ei*G0`\3U~xy-Q(^V w=<}"uS4ZK;jH3IyDANs}]M|{Nۡi }džeϟ9cwX# h;7$RwM29TMU&Ý+j|״GT!Q5rJ*氛 mJ }" ab mw`5j PlvM :ա|-$"A(B!Fh3iRxW_ K+DܸPhCiHZP/lՅA Y}?$a7 V"}!bʏ8K$źj0EI@9Y& QGm%q1dy @s(ܼ`W#$YfJI[2J K.)14R%}/ɷp| E%mgY*8J>T8ڪV\W,Ӊ$)!%ذy⃛4?|ӵ3I!v7ic]skMqȶ# t.Lfndecvdz?2j˶GU"sI|jXɓ>,(oCf5'jQ QЅ%~1c<^oucI‹A`!TKXI3C+(j0AhcW鸚#1cբ܃<\c2D槐fu>ȣ+@f E>CZ"P*$"/XV- R<<^_:;XW>z҈S<\D*)Z} 1߇^d @ffKwz?͵a>E~l1!Q 1d *ՔǂU.1 u@ T`Ru x\*ιoWqkvGi|l!z8W8 !;ilCzš ƪ O}ORMQx_i0p.e9MRFc;))*ǣGa,ek 8p[c. aM,YQx٦kp;m+d3TܢP$ O?o81pA֯xR+,^| ;5}ДxsF(%lHn{JY̙ƟO&xtI9 >\Һ( &yVAҴb˅֒QH{2 & Ù"Nv#;X {(+ d(PA,y*b:Q`zVj8b $FL%c%; RCZsc%TB GΤ7eՉA0l ϧT [*/PTwemI 61[DYb/vcbg/ D",pH6oM I~Hʣ*3\2Qɟ3t5Ca|:Xg A^:r]?3J0T)38#,M@پUܰ<|\d:F)\B\S4;vrb6.GJ@nޏTLj TLPy0Ad*$ ;"nYi;K!0.#q3ɳDi! 
<# \ZWwF*71n<iDOq2r%Ffl TTyA+FvP mnz+ BZx{tRHiqiRrUm V‘_F`TdNJ>Bcq׽;%S厱Rv.WeY媑I7cWluErG3|L{@Ǜ2QOR&=*eңjf0O˦c0m[}*fTLvia:/]˼ǁ.U+o`>]q>̧bPzsʈc M#8jb4˽Bu-~ӈQG>'be@:CF&9Uj}r}#舔( LЌ> !C[ Nٻ뚊|uo c&Rھt\1e\2ģ>[ ͨ@ETԇȩ#7Zj=eوJhF4 (u-iPCA'zi%7aе Jk5#=ra;;>Wb齷drqd9A(`$j%j: E4@ -/GvqPd>\e%^Rpe\&}F ԄhΩ(:XV7bl~Q2Le)(f.H-gF종'L{'ie1'l@o< "# .h)ze 57J QQ,yBy0)ҁE10%%π6 %Y jML6T!ΉUKI<*VM/w%8ͷqNzl#+?ތZKJ-v훛h#1~8grE('!x6_lj3E'zR=;zgɅBs7ihE3\ C$+SJg?/'5AXѻӥLY}rpcAokP4>KPu$-ŀ&<ɧIdy\\"ItR& ɜQ6M(-Cmq .do>˭drcB6C6!!-#፪lMҢlft\X9˵ ZڀJGCzd&@5d^6&W'[M3h95=H>y-:jQpNT]l8ϙy[I[B2Ќnm1uEDK׎B$YNc=2#McYF}-`Tw}NZ+]7H ,q ,Eq: hi{Smqp:5/8AĈ_vK?owQhTNw jx勇,*J1V;!"-2cȡ%nP m\ v2c} &fQ.k;{TZ}У5jA#u8w4P"-}da6FLK%vR>6^*07vQ({nz5ŅLd>)849Nj؍AGQR J#XlJhEyB[BnvY/Rxuг6 H_V'IR"&FCDf.Hw6'yzﱳV]g[WxQH4 B; X(»oӢ^y}L 'F KX byB(n>k:j-xoRrAaΩRk%/1 ZA{L4RV?eӮf-MuӮR*(zM{-?!+`!9+d(n^ %+7ԗr=@+"Sn"LNoLu4 zxVlVh[U m5CP5(PVeqj A!H:T}Jاj&}tj^2{yPWK Ӓ<"Z1ޥ1:>[gSr ;IOEWzUMrZ#^ӽ޼ܘ5A~>l9)ttÃ5]vqw3_3~rqt vlL֥k?`[D<o3t%Ӵ$xyS9vMLzzWyk܊Gʥ]\* ѳd*R%w:5VɆ5 ՚_0dzw烟.Ɠ >z_ق[Wvhnu!{9Gi<[tQP)2{ysq{_PBV41Xt2Lճbqޤ+F҃@JrS r7DhM:,#ʜms_3f$4YCp+u)7%K@ R(?}JP]>kö(Z gN㯧ѕ$F2~ݞE&kT|.AM̛Ձ:YWkz=tEIo7s./#Tew^oiP.:˛mm߱y izCEotj᩾{;Q,KP<7;DX׌nhl'K +ggI7.mdJV,^_f}nmiPGtھv;UxѴ[Fjj.$V2Uq>nD[[hNEXUzhh]օ|"%S5o{Τa_|"N?\&Wgy=ή/Vn拃rl0)8ͫىuFn޽^ּEqE}龞6{}賮m %^Mo47pΦ`ns纵;*yMK/9tYSls\3s"0ZS+C].v"K2@׆fjk=ZtS&\5%8 Mž+*> JSBmٕP[Jvg?lFZ&jOE=3ɿv,ZޛC2 77L%[g6|&ܒLWZBr@F:*FAN8yָMMYQ)> ufڐ\2 D4^Snq7$IFfN*F9 0sUFd'X*~,.&-,b"32_lHM{!E0fP:"v]Z6)ɄX&Ȋ'5iD=x{9h>nMe0`ӿNˏS׭9 y&M"6oS1M:.V/mO:m tltۆqȦ)!k<趩 ƘΦcJSEn۰7n/ljJĉArlܭ6wQ> U%aGJN; .~٢mzs#6 K*_BjQ0j6PTK=$e "[g~{SHu<-֫q p]\_IɔWHyQJNU/8m,I^ԥW5 8#P 꽡@lJ&HH5eWj# YJ^~q|LB`,nv 7nIU))Ba$|XܼlAzw/kҧՐ^ YJA-;]љ &TwnC$(ʖIKltV#lE%t9̫z($ኆi%3&DDIe):ŹyBaw"W/D44*xsSfBOLvFQjf@r [kFIN.(B[P] |[ˑY͓ٳ__Lq`g)IWhPAq\r PLH81ؠw~zK3co<^ tEIJJ:jۂo%˳]$ԂPsnW<_JGE1ЃsR w _ǚ!r4 %Kn(JRݒXv$n+_yYȂJoK U^ȠEqϾp ;=tYα RX7V50Ŏg/l {۹Z @n>{ bP\s +CDF}I9T۲t)#w0n*`^ : r?Je.w@#@-,0)]oUmE_ 9+l7͠꜊ IJ)K/QXjO=O{oITߙeŇ4,4V$<=Am]\ϜL]Hi8`kz[^IŸo {Wj' ƇNj4$Y'Z;o*&Ii!a$0Zۀ?||-|AjU 
3Xa/$Vՠj-gsؖ@~L;[vz⻋)&Yrg`\ ̒R+&;rv[uy[JRgA\><>]팄~ƓI=?L8E,=& 38Ka4m5!f\JfjNmls#,Llg#RW I✤= z| BA*5*Ջc6d-9NqoH=8⪷~lNGFn"l΂> D i3bQǐ5:|,pUOK=DեU 9%TP~f1uߝR3*JRͧh}F{J )N4K_Kwlzq>ib׊ $1HL3j*$>1iBGGߎ `?Y'fQ8R7kz4yvypdMש_(tNQn>C5iZqAAFBsSM*xRb ޣN6,v a4| t&DgTbW$h3Ftm ~tq|T*=r_@.*ޚ $<86Qs[rqd݉5ǑH2,v؜V~/ݯzTO"R@|_^dS|?.zVJ/n~wOӿe 3I,C w6͡VP^bjrQr^!ak8E+}g< 6cǩJVFY]ٜMQJ9-eo`3!އlߒkRTI^U1%ze^)OTPSV7c6lj Kdqfhl.*u첷՟xʫMPo'*k*VvKO„Ŝ;0Ңǥ֌Rk&&l|qc\Ɇ-x#eun6:Qe]TjId:]C뉀# kG OlloHM8ee{+KAGf{Y Ns\<;i0 ( 龧!lt)9]ܵA Vv{IP T>{I5sȼo^fϕ=J[ 6Ȣo裧X{ocgݵQ~]B8RB8њICs0b.Me6̋CE9!Iv'~=~,TrH nq$ZU(0ɗV:jbbVIR*$n 5JDDN"*P,Kb 1xid^[N|{kj݉`hpİtƏ2?NVW18'@mƓFQ3&Jڀ6CTt3%ce 65sUJXFgAb9.3{ \:F"p;`;7l״Dzug, UhGmB@8e܆Ŷv05Lΰ]ބIZD6H/a7ZKZhޯӡaKE;v<*=tk$k_ْo?8UCcȦXbHf^dn$&7wOl>~($0%i`H!U U+UH&*\J1?{abwf[7۽S 2opE<͕@ SЈBP\Vϭ@B]ޫrq/ ZQ!)|(j0мBkgU;;!سUbz~1P1)J'>y3% #%ě9w0)sS8YA>Fv/FoM6BN^}o_S/lDfZmhoC V v3EfM՞r;/j*97FK4[),%$m2A)"I@PP=)H0quZq G.oμ PB51l5xDג7$dN)L$Hex/8)IL >P̱HA,d+#t8 v[QlA 3ST=ƭX$± a8A-?E#[gN1nSRZ}rUj Y1iw\-UY4KnH&\T[Υ!ے-dkOҌ@MUVNF~u1` b 'A,cmK3 Œ[C\ bW WxifODШ͑ beZ/ "^BbvQܝ󋫁Μ;&NAZ%ͬA \pYݪ  aDoސv}3ڕoGL;nڑhiGoo[e1S-L[v*PU&6jw<^lx;xnɚ{VzAۃTz~9r-dcswP Sg{߶m!?w-I%iWk&0lE@IT[v-ml+ұh-Zy<I!!F cHY{*Y4M&Xw?t. 
I R\%4aEHgR6^Ts; bw `FD/}-#R2#; BU$expEq\0Ai:4$+[KKA(}Ÿӏ!X4%&UT)JkLp$@cޛ^ 9+>Y^nEQ+^!Ǵ< { cYvCޝu`_^:10@ā Lu 0v{#ÉCx0\{ѕ)D7kX]qd,/ٵ ;AHVooڃFudf5RQT5MFv?9NnC#۳OհB$!]cXql>F8&>pYì\X\*)[ɂ\*fA4@;(z("zȲ)Lpqiy:XD:8^A6d'{PŸɧ@E${5tɥhbKoiZt鹖(]Yk&YǡᦙbR§Ԙi;PYSph8ԦE4(D,bRTy2rL.jmPԳ#Eqs;O˼a -PH@8v895ђP;_e Ts R{bKMt}/m<A%G_3Syw2> I) 2"4ZT44LR2h-ν/:|^7dQhg艋"w(nXxUωJ}ht$s8y }b^ q ޝQdÂ"u4PABӧf^'B8 :`2W3U" P5< |/~;v]yJ2V>k^L` 6ƭ;33teu93s'1$42?ݶ ̥ nFJJ} 3ӡ-;u50xz ?$)LCbNu;%0{&GOwP#6q5٨3= !LxL1BvَMaPr?3zzz[JYR ,(҂0[XK|ҩ.Me-O%c Yw ݢ#p''tH_,8 +J l҅ !GDVئuC8UV;Jv`I/¥SnxԵbޯ|ZMT@7ǯT0o-Ńvl4^m=up>3XWXc^wnlGz!HHHiW}n$bjn$ֻնy0,KAUUyU[ZP Q;[r7X}pQsw~TE 8y7qN,)0멩rS=x^Ligt@oxN~םܹ\ZUک8Qtj)juͿe|/-g[;gII Nhٰq^ص#cN5Ή^ !L.A+EPK#aʐÄ|k=Yz( O ژq(Cn!!!1ٹ:HLNs9T_>6^y!r lK ,ĩKG=~#|ou{{Dz[؏ґ RxfNab78ʑQUPOvfǛn+mЏ-`:=//:: UCk$%q^sv3Q>kX|}y3k(nf4es&#HϽj|CZ01䄼 XM N -on^a&ZERlH z@S&&ݖ?)ݪEoF~`U{85 7#;ݾOJ7 \0ntr\7w7p3g\9izg R&]3qM-Cp6?"7%\BcYqQo֛bLqF{*o9˟Z 2s@>ƺpf7 ~ʆ&iUcɦoCdwp/ctQ#SYv[a«U`khYB<٦MN~unW7QnNi?qN{v||PMf,vTKr"psy6g嫛#o99zМڟ`]pp}ɅzC7d͟)!o{agG1l ;+A2}rܽ7K_Hhw<8ٹ?\嶯gCH3՝O˯,qlP_Q,'`FЬq֕'u=C8V÷Jo Pq/bUKܣܝ66C/rpKMVD[\Ҩ9m )|-l1Z*Qbq58mٷʸߡ}ݮnW`ewg2࿗)egʼnA|qc5ٿf*fH´=l$?kzK/WY"z82}}f'YysTe병yF:IȳNHy4-VCq ML㍙ݶڶ%Ji+_Vďc{lI,hh36,~ecGaU..ŕĤ@"PaH"S9qT#S)dCbq)1 S4?jqVD*Obw!RKChV"Q4,)Sj$%!(4!5H>j"X3#bck418j0$T"EvʠY\O`![]{_)$e-]s1MouXPQ>OÔt?6F9ƭ;ǩ6MsOMI9j>h `zP&l=ݔ q?}Gs3(T&EH$|De1әHS:ܤ6qM#N NoFUfSawdjX{{^i5UD9%HtirIjZ(!M^NCDI !1!IYD$"2I$!"4!:%[ oaR[@O~̮fnya[ic)8+iaόZxZԑPF+vcordV&]}w<}whCFD#C'3 'y pvwmw1ڈ5QmP9D&A~*%W\G-6nJτDfbĐ1tJK,51im(p&Gn\Tpl!yR{av@BK#l.ϔQh$(9EOʊ z2{8rwXy3pL.cŐK.ژ N0"x*֧*̓:R:;Uaqm#K0Ι'u]_{۶zמ$! 
10231ms (15:54:16.354)
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[1482175469]: [10.231345087s] [10.231345087s] END
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.355131 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.360258 4808 trace.go:236] Trace[142975957]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:05.500) (total time: 10859ms):
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[142975957]: ---"Objects listed" error: 10859ms (15:54:16.360)
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[142975957]: [10.859391329s] [10.859391329s] END
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.360294 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.360414 4808 trace.go:236] Trace[643777040]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:04.236) (total time: 12123ms):
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[643777040]: ---"Objects listed" error: 12123ms (15:54:16.360)
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[643777040]: [12.123873161s] [12.123873161s] END
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.360446 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.362401 4808 trace.go:236] Trace[1570336420]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (17-Feb-2026 15:54:03.653) (total time: 12708ms):
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[1570336420]: ---"Objects listed" error: 12708ms (15:54:16.362)
Feb 17 15:54:16 crc kubenswrapper[4808]: Trace[1570336420]: [12.708753128s] [12.708753128s] END
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.362431 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 17 15:54:16 crc kubenswrapper[4808]: E0217 15:54:16.365010 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.366558 4808 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.370633 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.406693 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55868->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.406788 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55868->192.168.126.11:17697: read: connection reset by peer"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.417228 4808 csr.go:261] certificate signing request csr-2cnbv is approved, waiting to be issued
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.429709 4808 csr.go:257] certificate signing request csr-2cnbv is issued
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.656718 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.657547 4808 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.657663 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.661526 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 17 15:54:16 crc kubenswrapper[4808]: I0217 15:54:16.951966 4808 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952471 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952541 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952496 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:16 crc kubenswrapper[4808]: E0217 15:54:16.952555 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": read tcp 38.102.83.64:57292->38.102.83.64:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-apiserver-crc.189513a749c78f92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:53:57.65843957 +0000 UTC m=+1.174798643,LastTimestamp:2026-02-17 15:53:57.65843957 +0000 UTC m=+1.174798643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 17 15:54:16 crc kubenswrapper[4808]: W0217 15:54:16.952496 4808 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.087839 4808 apiserver.go:52] "Watching apiserver"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093126 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093489 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-f8pfh","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093853 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.093897 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.093970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094034 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.094163 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094296 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094460 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.094591 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.094732 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.097522 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:34:15.995814718 +0000 UTC Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.097808 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.099937 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100103 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100208 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100515 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100634 4808 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.100680 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.101139 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.101474 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.101480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.106933 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.111948 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.126031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.152522 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.167646 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.181224 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.182948 4808 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.190166 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.199903 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.209543 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.219926 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.233934 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.241549 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.255545 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.265658 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.265731 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.267930 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df" exitCode=255 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.267972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df"} Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272289 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272318 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272348 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272379 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272409 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272463 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272520 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272547 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272596 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272622 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272833 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.273126 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.273136 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.272654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274464 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274526 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274543 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274632 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274679 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274737 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274767 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274763 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274831 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274860 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274943 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275233 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275131 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275238 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275259 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275426 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275463 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275617 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275957 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.275981 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276070 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276250 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.274891 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276392 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276450 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276479 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276501 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276529 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276558 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276603 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" 
(UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276627 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276697 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276721 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276753 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276786 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276810 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276840 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276872 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276900 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276920 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.276947 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276999 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277051 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277125 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277160 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277192 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277218 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 
15:54:17.277266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277291 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277315 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277340 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277390 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.277396 4808 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278015 4808 scope.go:117] "RemoveContainer" containerID="68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.276282 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277544 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277623 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277890 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278009 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278018 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278314 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278303 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278567 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278535 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278634 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.277409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278719 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278814 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278829 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278876 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 
17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278899 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278920 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278945 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278971 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278983 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.278995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.279021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.279050 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.279076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 17 
15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281459 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281602 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.281885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282058 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282207 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282365 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282522 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.282623 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.782523741 +0000 UTC m=+21.298882814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282672 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282715 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282703 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.282850 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283430 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283658 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283683 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283883 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.283966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.284906 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.284958 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285122 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285284 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285328 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285521 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285560 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285621 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285565 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285649 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285740 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285791 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285853 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287067 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287121 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287183 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287239 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287267 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287382 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287407 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287434 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287460 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287486 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287537 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287563 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287608 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287634 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287703 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287728 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287753 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287783 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287821 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287845 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287876 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287899 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287974 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288075 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288107 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288199 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288231 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288265 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288294 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288388 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288414 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288444 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288526 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288552 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288599 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288634 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288705 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289802 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289856 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289882 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289909 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289937 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289963 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290046 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290107 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291684 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291770 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291863 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291939 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292020 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292091 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292167 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292610 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292683 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292880 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293002 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293110 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\")
pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293230 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293296 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293432 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293503 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293568 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293740 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293889 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293972 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.294059 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294129 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294223 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294301 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294370 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294471 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294721 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294823 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294968 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295170 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295273 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 
crc kubenswrapper[4808]: I0217 15:54:17.295502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkcvd\" (UniqueName: \"kubernetes.io/projected/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-kube-api-access-mkcvd\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295655 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295916 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296154 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.285748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286425 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286457 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287008 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287197 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287243 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287535 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287549 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287612 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.286835 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.287637 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288016 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288058 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288704 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288868 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.288893 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289285 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289265 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289413 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289684 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289732 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.289988 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290530 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290607 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.290621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291005 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291096 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.291759 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292482 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.292991 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293545 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.293836 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294003 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294086 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294656 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294496 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.294990 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295404 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.295621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296267 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296646 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.296303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-hosts-file\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297358 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297368 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297407 4808 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297437 4808 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297459 4808 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297480 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297500 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297524 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 
crc kubenswrapper[4808]: I0217 15:54:17.297544 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297564 4808 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297611 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297631 4808 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297650 4808 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297669 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297692 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: 
I0217 15:54:17.297713 4808 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297732 4808 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297751 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297771 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297790 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297808 4808 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297825 4808 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297844 4808 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297862 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297880 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297898 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297916 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297935 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297952 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297969 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") 
on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.297986 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298003 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298020 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298040 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298061 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298078 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298231 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: 
I0217 15:54:17.298254 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298272 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298291 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298309 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298325 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298341 4808 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298360 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298378 4808 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298410 4808 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298429 4808 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298446 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298465 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298483 4808 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298506 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298524 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298544 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298562 4808 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298648 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298670 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298691 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298708 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298725 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: 
\"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298742 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298757 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298772 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298790 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298809 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298824 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298838 4808 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") 
on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298854 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298868 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298887 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298902 4808 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298918 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298932 4808 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298946 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298961 4808 
reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298976 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298991 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299006 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299022 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299040 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299056 4808 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299073 4808 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299094 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299113 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299131 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299153 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299174 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299192 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299210 4808 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299229 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299252 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299271 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299292 4808 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299311 4808 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299332 4808 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299348 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299365 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299382 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299401 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299421 4808 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299440 4808 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299459 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299477 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299497 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298380 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.298675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299001 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299372 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.299806 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.300048 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.300644 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.301914 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.302298 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.302925 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.303648 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304097 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304253 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304356 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.304965 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.305635 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.305706 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.305963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.306225 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.306248 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307229 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307421 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307616 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.307755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.308346 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310039 4808 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310366 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.310854 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.311385 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.311730 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.312024 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.312595 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.312920 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.316104 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.816081662 +0000 UTC m=+21.332440735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.314051 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.314524 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315021 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315310 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315501 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315726 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315913 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.315948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.316704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.316790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.318252 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.318443 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.318508 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.818493346 +0000 UTC m=+21.334852619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.318950 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.319951 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.323930 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324275 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324493 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.324816 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.325363 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.325712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.325955 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.326213 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.326345 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.326541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.327020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.327743 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328039 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328065 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328081 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.328159 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.828135889 +0000 UTC m=+21.344494962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.328611 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.327721 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.329198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.330615 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.330790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331853 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331875 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331887 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.331915 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.332111 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.331928 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:17.831916029 +0000 UTC m=+21.348275102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.332591 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.332624 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.333143 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.333325 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.333863 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.334203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.334588 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.342819 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.343404 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.343765 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344070 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344185 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344391 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344549 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344631 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.344810 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.345365 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.347544 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") 
pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.348140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.348737 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349373 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349721 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349260 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349980 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.349353 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.348947 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.350872 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.351826 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.355464 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.356567 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.361687 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.365299 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.367437 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.376079 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.380367 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.381377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.391932 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400025 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400624 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400764 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkcvd\" (UniqueName: \"kubernetes.io/projected/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-kube-api-access-mkcvd\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.400942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-hosts-file\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401113 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401134 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401145 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401155 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401166 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401204 4808 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401215 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401226 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401236 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401247 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401257 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401270 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401280 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401309 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401320 4808 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401333 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401475 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401487 4808 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401497 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401507 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401520 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401530 4808 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401542 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401554 4808 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401565 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401596 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401610 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401624 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401639 4808 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401653 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401671 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401684 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401697 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401711 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401726 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401738 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401751 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401764 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401776 4808 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401789 4808 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401802 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401815 4808 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401828 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401840 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401853 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401865 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401877 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401889 4808 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401901 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401950 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.401997 4808 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402050 4808 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402066 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402081 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 
15:54:17.402093 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402134 4808 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402150 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402154 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402162 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402224 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402242 4808 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.402260 4808 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402274 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402286 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402299 4808 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402310 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402321 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402331 4808 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402341 4808 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402351 4808 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402360 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402370 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402379 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402390 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402399 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402409 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402418 4808 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402427 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402437 4808 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402449 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402458 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402467 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402477 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 17 
15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402486 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402496 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402507 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402516 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402527 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402538 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402552 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402562 4808 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402603 4808 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402614 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.402088 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-hosts-file\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.407604 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.410036 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.413297 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.420618 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.421219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkcvd\" (UniqueName: \"kubernetes.io/projected/13cb51e0-9eb4-4948-a9bf-93cddaa429fe-kube-api-access-mkcvd\") pod \"node-resolver-f8pfh\" (UID: \"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\") " pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.421357 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.430588 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-17 15:49:16 +0000 UTC, rotation deadline is 2026-12-15 14:46:04.347844208 +0000 UTC Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.430669 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7222h51m46.917177557s for next certificate rotation Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.434674 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-f8pfh" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.438044 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: W0217 15:54:17.453046 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532 WatchSource:0}: Error finding container 09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532: Status 404 returned error can't find the container with id 09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.453623 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.464446 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: W0217 15:54:17.472978 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13cb51e0_9eb4_4948_a9bf_93cddaa429fe.slice/crio-8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20 WatchSource:0}: Error finding container 8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20: Status 404 returned error can't find the container with id 
8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.530739 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pr5s4"] Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.531627 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.538690 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.538970 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.539109 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.546659 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.557950 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp 
[::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.571099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.587179 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.599039 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.605979 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xc9\" (UniqueName: \"kubernetes.io/projected/a4989dd6-5d44-42b5-882c-12a10ffc7911-kube-api-access-q2xc9\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.606077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/a4989dd6-5d44-42b5-882c-12a10ffc7911-serviceca\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.606108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4989dd6-5d44-42b5-882c-12a10ffc7911-host\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.614428 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.629939 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.653311 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.669083 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.686466 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with 
unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xc9\" (UniqueName: \"kubernetes.io/projected/a4989dd6-5d44-42b5-882c-12a10ffc7911-kube-api-access-q2xc9\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc 
kubenswrapper[4808]: I0217 15:54:17.707082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4989dd6-5d44-42b5-882c-12a10ffc7911-serviceca\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707123 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4989dd6-5d44-42b5-882c-12a10ffc7911-host\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.707208 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a4989dd6-5d44-42b5-882c-12a10ffc7911-host\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.708358 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a4989dd6-5d44-42b5-882c-12a10ffc7911-serviceca\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.725341 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xc9\" (UniqueName: \"kubernetes.io/projected/a4989dd6-5d44-42b5-882c-12a10ffc7911-kube-api-access-q2xc9\") pod \"node-ca-pr5s4\" (UID: \"a4989dd6-5d44-42b5-882c-12a10ffc7911\") " pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.807799 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.807976 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.807951508 +0000 UTC m=+22.324310581 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.864664 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pr5s4" Feb 17 15:54:17 crc kubenswrapper[4808]: W0217 15:54:17.877637 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4989dd6_5d44_42b5_882c_12a10ffc7911.slice/crio-a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2 WatchSource:0}: Error finding container a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2: Status 404 returned error can't find the container with id a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2 Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.908985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.909048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.909082 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:17 crc kubenswrapper[4808]: I0217 15:54:17.909114 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909236 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909266 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909265 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909341 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.9093211 +0000 UTC m=+22.425680173 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909394 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.909364451 +0000 UTC m=+22.425723544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909266 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909434 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909452 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909292 4808 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909482 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909529 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.909508035 +0000 UTC m=+22.425867108 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:17 crc kubenswrapper[4808]: E0217 15:54:17.909549 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:18.909540686 +0000 UTC m=+22.425899759 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.097801 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:08:05.994079398 +0000 UTC Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.274014 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.276368 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.276843 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.278030 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f8pfh" event={"ID":"13cb51e0-9eb4-4948-a9bf-93cddaa429fe","Type":"ContainerStarted","Data":"e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.278085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-f8pfh" 
event={"ID":"13cb51e0-9eb4-4948-a9bf-93cddaa429fe","Type":"ContainerStarted","Data":"8ae4359922daee2ca55f193b06acbd233caa4c9d5554f03b1d2c8adcd5ce6f20"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.279403 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"09a8e76b13bfe0e424e43bdf3538f2955a34754ff8ec198e8c5e5985d1232532"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.282153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.282181 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"84eb93f311bf1bd277aed541552b61365df084f2986d8df7dc489002bdb980cd"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.283897 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pr5s4" event={"ID":"a4989dd6-5d44-42b5-882c-12a10ffc7911","Type":"ContainerStarted","Data":"228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.283921 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pr5s4" event={"ID":"a4989dd6-5d44-42b5-882c-12a10ffc7911","Type":"ContainerStarted","Data":"a8571625297a6141427edbdaaf78be54f992f7edd1a9da1421d2de90b9f4bdc2"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.285662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.285698 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.285714 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"110706b24914ccd13caa26782092eec6177d2477667e6ad2b4c66eb04823a4ee"} Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.292507 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.301068 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kx4nl"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.301708 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-msgfd"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.301930 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.302308 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.303287 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k8v8k"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.303585 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304471 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304698 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304865 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.304980 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.305139 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.305242 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.305338 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.308009 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.308758 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.309182 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.309384 4808 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.309539 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.328165 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.343345 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.361527 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.383118 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.400514 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.412943 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-cni-binary-copy\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " 
pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413051 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413092 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-k8s-cni-cncf-io\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413110 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-cnibin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-os-release\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " 
pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-netns\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413186 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-multus-certs\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413205 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413244 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-socket-dir-parent\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-etc-kubernetes\") pod \"multus-msgfd\" (UID: 
\"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca38b6e7-b21c-453d-8b6c-a163dac84b35-proxy-tls\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413323 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm52q\" (UniqueName: \"kubernetes.io/projected/ca38b6e7-b21c-453d-8b6c-a163dac84b35-kube-api-access-bm52q\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413445 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-system-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413513 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-kubelet\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413553 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmn2s\" (UniqueName: 
\"kubernetes.io/projected/18916d6d-e063-40a0-816f-554f95cd2956-kube-api-access-qmn2s\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-multus\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413697 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-multus-daemon-config\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413765 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca38b6e7-b21c-453d-8b6c-a163dac84b35-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413803 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-cnibin\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413836 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7t282\" (UniqueName: \"kubernetes.io/projected/a6c9480c-4161-4c38-bec1-0822c6692f6e-kube-api-access-7t282\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-bin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413898 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-os-release\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-hostroot\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413962 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-conf-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.413989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.414043 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.414102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca38b6e7-b21c-453d-8b6c-a163dac84b35-rootfs\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.415119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.429394 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.441099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.455042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.468089 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.485988 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.498042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.510883 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515715 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-cni-binary-copy\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515782 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515867 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-k8s-cni-cncf-io\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-os-release\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.515948 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-cnibin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-netns\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516069 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-k8s-cni-cncf-io\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-netns\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516105 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-cnibin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-multus-certs\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516249 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " 
pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-etc-kubernetes\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516298 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-run-multus-certs\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516348 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-os-release\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca38b6e7-b21c-453d-8b6c-a163dac84b35-proxy-tls\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516324 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-system-cni-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 
15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516384 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm52q\" (UniqueName: \"kubernetes.io/projected/ca38b6e7-b21c-453d-8b6c-a163dac84b35-kube-api-access-bm52q\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-etc-kubernetes\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-socket-dir-parent\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516553 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-socket-dir-parent\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516592 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-system-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516634 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-system-cni-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516651 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-cni-binary-copy\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516654 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-kubelet\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516685 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-kubelet\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516707 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmn2s\" (UniqueName: \"kubernetes.io/projected/18916d6d-e063-40a0-816f-554f95cd2956-kube-api-access-qmn2s\") 
pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-multus\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516757 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-multus-daemon-config\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca38b6e7-b21c-453d-8b6c-a163dac84b35-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516791 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-cnibin\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-multus\") pod \"multus-msgfd\" (UID: 
\"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t282\" (UniqueName: \"kubernetes.io/projected/a6c9480c-4161-4c38-bec1-0822c6692f6e-kube-api-access-7t282\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516854 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-cnibin\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-bin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516908 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-os-release\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516927 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " 
pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-host-var-lib-cni-bin\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516966 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-hostroot\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.516937 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-hostroot\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-os-release\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517051 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-conf-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517201 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca38b6e7-b21c-453d-8b6c-a163dac84b35-rootfs\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/18916d6d-e063-40a0-816f-554f95cd2956-multus-conf-dir\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517313 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ca38b6e7-b21c-453d-8b6c-a163dac84b35-rootfs\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517465 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/18916d6d-e063-40a0-816f-554f95cd2956-multus-daemon-config\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517672 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a6c9480c-4161-4c38-bec1-0822c6692f6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517674 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca38b6e7-b21c-453d-8b6c-a163dac84b35-mcd-auth-proxy-config\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.517762 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a6c9480c-4161-4c38-bec1-0822c6692f6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.526186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca38b6e7-b21c-453d-8b6c-a163dac84b35-proxy-tls\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.526354 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\
\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.538184 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t282\" (UniqueName: \"kubernetes.io/projected/a6c9480c-4161-4c38-bec1-0822c6692f6e-kube-api-access-7t282\") pod \"multus-additional-cni-plugins-kx4nl\" (UID: \"a6c9480c-4161-4c38-bec1-0822c6692f6e\") " pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.538491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm52q\" (UniqueName: \"kubernetes.io/projected/ca38b6e7-b21c-453d-8b6c-a163dac84b35-kube-api-access-bm52q\") pod \"machine-config-daemon-k8v8k\" (UID: \"ca38b6e7-b21c-453d-8b6c-a163dac84b35\") " pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.545532 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.550186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmn2s\" (UniqueName: \"kubernetes.io/projected/18916d6d-e063-40a0-816f-554f95cd2956-kube-api-access-qmn2s\") pod \"multus-msgfd\" (UID: \"18916d6d-e063-40a0-816f-554f95cd2956\") " pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.566070 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.581480 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.593766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.606063 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.616545 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.656067 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-msgfd" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.669182 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" Feb 17 15:54:18 crc kubenswrapper[4808]: W0217 15:54:18.673715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18916d6d_e063_40a0_816f_554f95cd2956.slice/crio-0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d WatchSource:0}: Error finding container 0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d: Status 404 returned error can't find the container with id 0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.676432 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 15:54:18 crc kubenswrapper[4808]: W0217 15:54:18.700969 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6c9480c_4161_4c38_bec1_0822c6692f6e.slice/crio-7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215 WatchSource:0}: Error finding container 7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215: Status 404 returned error can't find the container with id 7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215 Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.702671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.704325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.708190 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.709238 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.709724 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.711133 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.711544 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.711705 4808 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.713417 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.728996 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"me
trics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.741936 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.762096 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.800142 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.819819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.819960 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:54:20.819941031 +0000 UTC m=+24.336300104 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.819990 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820031 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820063 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820132 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820155 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820175 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820497 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820549 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820619 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820715 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820787 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 
15:54:18.820828 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.820866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.842544 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.884723 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.895305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.912885 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922386 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922582 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922604 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922699 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922718 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922738 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922761 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc 
kubenswrapper[4808]: I0217 15:54:18.922807 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922830 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922850 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922869 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922889 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:18 crc 
kubenswrapper[4808]: I0217 15:54:18.922912 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922936 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922974 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.922992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923008 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923231 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923305 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.923286035 +0000 UTC m=+24.439645098 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923321 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923343 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923383 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923411 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.923417 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.923394648 +0000 UTC m=+24.439753721 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923471 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923619 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923760 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923803 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923927 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923957 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.923979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924042 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 
15:54:18.924061 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924076 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924112 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924112 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924252 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:20.924222909 +0000 UTC m=+24.440581982 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924297 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924349 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924371 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924383 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: E0217 15:54:18.924492 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:54:20.924468836 +0000 UTC m=+24.440827909 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.924827 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.926043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.932494 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.947814 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.960020 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.975111 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.974965 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.975498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"ovnkube-node-tgvlh\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:18 crc kubenswrapper[4808]: I0217 15:54:18.989947 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.019330 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:19 crc kubenswrapper[4808]: W0217 15:54:19.033386 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5748f02a_e3dd_47c7_b89d_b472c718e593.slice/crio-ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9 WatchSource:0}: Error finding container ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9: Status 404 returned error can't find the container with id ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9 Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.098320 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:20:53.965851242 +0000 UTC Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.145803 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.145858 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.145941 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:19 crc kubenswrapper[4808]: E0217 15:54:19.145993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:19 crc kubenswrapper[4808]: E0217 15:54:19.146123 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:19 crc kubenswrapper[4808]: E0217 15:54:19.146233 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.150225 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.151087 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.152427 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.153190 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.154235 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.154819 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.155484 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.156676 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.157432 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.158361 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.158914 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.159989 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.160522 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.161034 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.161975 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.162476 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.163512 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.163952 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.164518 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.165538 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.166156 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.167108 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.167536 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.168531 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.168989 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.169749 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.170872 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.171379 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.172368 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.172863 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.173712 4808 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.173822 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.175395 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.176311 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.176771 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.178263 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.178923 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.179875 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.180635 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.181865 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.182343 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.183387 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.184021 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.185023 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.185469 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.186342 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.186905 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.187954 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.188443 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.190071 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.190531 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.191514 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.192157 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.192630 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.290627 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.290681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.290700 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"e07e40ad4d38873b67ba6ba5a9d61cab8dd149e8e9c16cd0656006595f3789f3"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.292473 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 
15:54:19.292532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"0830da4e22ca1f08d719d050f54327f8d31a2fd2b5efe349b722bc7cea49785d"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.294679 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" exitCode=0 Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.294729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.294750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.297494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerStarted","Data":"7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.297532 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerStarted","Data":"7b2d0c263fd8165a5a56a6c8d7a691d79a6bf709c4bbd0f10203b50e2ce86215"} Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.306446 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.323727 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.341428 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.353473 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.366943 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.389865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.414227 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.430308 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.443079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.463693 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.478469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.496325 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.514766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.530098 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-i
dentity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.543756 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.560560 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.586655 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.601593 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.620305 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.640282 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.661401 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.703396 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.743281 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.780396 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.819849 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:19 crc kubenswrapper[4808]: I0217 15:54:19.857999 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.099160 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 07:32:01.967522047 +0000 UTC Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.304976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" 
event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305068 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.305079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.308015 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d" exitCode=0 Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.308048 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" 
event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d"} Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.327493 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\
":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.347959 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.367074 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.385214 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.406525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.422474 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.441123 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.467388 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.483847 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.496999 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.517525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.539785 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.555691 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.704229 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.710748 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.714702 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.720216 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.737169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.753485 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.766401 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.785722 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.816625 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.833203 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.843968 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.845915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.845983 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.845959458 +0000 UTC m=+28.362318541 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.862198 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.876653 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.891962 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.903521 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.922538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949846 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949891 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.949934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950055 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950118 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.95010014 +0000 UTC m=+28.466459213 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950238 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950264 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950279 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950314 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.950300315 +0000 UTC m=+28.466659378 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950381 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950398 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950408 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950439 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.950431979 +0000 UTC m=+28.466791052 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950510 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: E0217 15:54:20.950542 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:24.950529131 +0000 UTC m=+28.466888204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.967603 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:20 crc kubenswrapper[4808]: I0217 15:54:20.999489 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.041865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.086763 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.100053 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:32:53.20709655 +0000 UTC Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.120159 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.145435 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.145491 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.145438 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:21 crc kubenswrapper[4808]: E0217 15:54:21.145648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:21 crc kubenswrapper[4808]: E0217 15:54:21.145768 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:21 crc kubenswrapper[4808]: E0217 15:54:21.145914 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.160033 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.200820 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.238984 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.281604 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.317176 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.320342 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44" exitCode=0 Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.320439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44"} Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.322361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04"} Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.335259 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.367748 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.405634 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.440823 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.484162 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.523736 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.560792 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.603748 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.642567 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.686182 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.721436 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.758151 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.806170 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.839938 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.885285 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.919848 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.960342 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:21 crc kubenswrapper[4808]: I0217 15:54:21.997791 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:21Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.037793 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.101086 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:50:11.523443574 +0000 UTC Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.329385 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b" exitCode=0 Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.329489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b"} Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.352958 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.378405 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.404270 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.427358 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.475294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.492169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d
66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.510757 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.528060 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.544135 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.562451 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.575931 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.593912 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.611467 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.637073 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.765954 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.768623 4808 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.777500 4808 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.777799 4808 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.778993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc 
kubenswrapper[4808]: I0217 15:54:22.779020 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.779030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.779042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.779057 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.799389 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803507 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.803591 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.827436 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.834677 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.849967 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.854846 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.854952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.855026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.855109 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.855236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.868406 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.873557 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.886472 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:22Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:22 crc kubenswrapper[4808]: E0217 15:54:22.886757 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888880 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.888918 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992142 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992211 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:22 crc kubenswrapper[4808]: I0217 15:54:22.992247 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:22Z","lastTransitionTime":"2026-02-17T15:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095689 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.095720 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.101487 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:38:15.710958583 +0000 UTC Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.147381 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:23 crc kubenswrapper[4808]: E0217 15:54:23.147505 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.147885 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:23 crc kubenswrapper[4808]: E0217 15:54:23.147964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.148025 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:23 crc kubenswrapper[4808]: E0217 15:54:23.148072 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.198957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199036 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.199050 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302371 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.302422 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.335929 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8" exitCode=0 Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.336008 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.345764 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.355200 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.375945 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.396994 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc 
kubenswrapper[4808]: I0217 15:54:23.405840 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.405942 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.422034 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.440169 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.455447 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.474559 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.498888 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.509682 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.517600 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.532100 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.545531 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.566070 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.583599 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.603891 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:23Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613200 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.613249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.716206 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.821415 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.892901 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:23 crc kubenswrapper[4808]: I0217 15:54:23.928356 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:23Z","lastTransitionTime":"2026-02-17T15:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032374 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.032499 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.102647 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:16:25.944252745 +0000 UTC Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.136848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137223 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.137825 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241178 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241314 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.241334 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.345670 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.346236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.355386 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c" exitCode=0 Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.355704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.384964 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.410232 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.442243 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455828 4808 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.455849 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.466453 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.482967 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.508430 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.536652 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558240 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 
15:54:24.558512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.558524 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.572844 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.586911 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.603591 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.625765 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.641655 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.662666 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:24Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.663325 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.766707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767131 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767327 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.767434 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880103 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.880144 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.895048 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.895378 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:54:32.895325552 +0000 UTC m=+36.411684655 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.983823 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:24Z","lastTransitionTime":"2026-02-17T15:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.992463 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996260 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996437 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:24 crc kubenswrapper[4808]: I0217 15:54:24.996501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 
15:54:24.996554 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996604 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996618 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996653 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996683 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996688 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996721 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996727 4808 secret.go:188] Couldn't get 
secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996693 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996669892 +0000 UTC m=+36.513028965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996815 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996781834 +0000 UTC m=+36.513141087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996850 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:54:32.996830396 +0000 UTC m=+36.513189689 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:24 crc kubenswrapper[4808]: E0217 15:54:24.996904 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.996885307 +0000 UTC m=+36.513244630 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.086592 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: 
I0217 15:54:25.086602 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.104294 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:47:39.898735694 +0000 UTC Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.145656 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.145714 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:25 crc kubenswrapper[4808]: E0217 15:54:25.145831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:25 crc kubenswrapper[4808]: E0217 15:54:25.145926 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.146045 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:25 crc kubenswrapper[4808]: E0217 15:54:25.146217 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.190735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.294691 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.373605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.374606 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.374808 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.374889 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.385353 4808 generic.go:334] "Generic (PLEG): container finished" podID="a6c9480c-4161-4c38-bec1-0822c6692f6e" containerID="89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4" exitCode=0 Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.385432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerDied","Data":"89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.399346 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400665 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400685 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400718 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.400739 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.433667 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.433757 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.434537 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.462321 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.483160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.506907 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc 
kubenswrapper[4808]: I0217 15:54:25.507072 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.507092 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.511324 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.532919 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.555492 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.576474 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.593441 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.610915 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.613413 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.635325 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.652974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.669065 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.690682 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.710026 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.713956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714042 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.714089 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.729269 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.750282 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.766491 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.789011 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.810704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.817987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818063 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.818256 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.831555 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.850425 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.868945 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.884850 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.902159 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.917471 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921840 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921858 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.921912 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:25Z","lastTransitionTime":"2026-02-17T15:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.939708 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1fea
a0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:25 crc kubenswrapper[4808]: I0217 15:54:25.954509 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:25Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025119 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc 
kubenswrapper[4808]: I0217 15:54:26.025152 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.025175 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.105308 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:10:02.283555141 +0000 UTC Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128223 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128254 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.128278 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.201131 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231931 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.231959 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335842 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.335989 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.396459 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" event={"ID":"a6c9480c-4161-4c38-bec1-0822c6692f6e","Type":"ContainerStarted","Data":"53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.418415 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dc
bb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.438847 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.439260 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.462557 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.481348 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.499273 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.518084 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.538420 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548803 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.548912 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.566637 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.592952 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.626888 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655002 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655121 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.655941 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.677442 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.696711 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.723157 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:26Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.758923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.758995 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.759010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.759032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.759047 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862431 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862549 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.862587 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965698 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:26 crc kubenswrapper[4808]: I0217 15:54:26.965726 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:26Z","lastTransitionTime":"2026-02-17T15:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068743 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.068826 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.105717 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 08:14:10.322190307 +0000 UTC Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.146064 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:27 crc kubenswrapper[4808]: E0217 15:54:27.146230 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.146618 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:27 crc kubenswrapper[4808]: E0217 15:54:27.146694 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.146766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:27 crc kubenswrapper[4808]: E0217 15:54:27.146826 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171147 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.171249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.172310 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.194525 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.205792 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.218902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.236031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.255951 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274149 4808 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274163 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274173 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.274342 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2
defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.288636 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.315727 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.343351 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.358483 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.375602 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc 
kubenswrapper[4808]: I0217 15:54:27.377085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.377099 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.390388 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 
15:54:27.404507 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:27Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.480935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.480998 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.481018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.481041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.481057 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584274 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.584304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.686989 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.687009 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.790491 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.835121 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.892968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893036 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.893109 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997514 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:27 crc kubenswrapper[4808]: I0217 15:54:27.997525 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:27Z","lastTransitionTime":"2026-02-17T15:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100925 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100947 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.100998 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.106183 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:34:49.865572005 +0000 UTC Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.204408 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308609 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.308686 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.407883 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/0.log" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410316 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.410730 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.413294 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7" exitCode=1 Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.413384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.414225 4808 scope.go:117] "RemoveContainer" containerID="84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.431969 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.446273 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.460799 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.488896 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.506031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.513532 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.513831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.514043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 
15:54:28.514288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.514474 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.525192 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.552656 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.571238 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.587363 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.612347 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.618751 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.636179 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de58673
8e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.663187 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.685890 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.703825 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:28Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:28 crc kubenswrapper[4808]: 
I0217 15:54:28.723288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723375 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723407 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.723429 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827370 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.827472 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931319 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931410 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:28 crc kubenswrapper[4808]: I0217 15:54:28.931470 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:28Z","lastTransitionTime":"2026-02-17T15:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045153 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.045238 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.106887 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 14:22:14.441856303 +0000 UTC Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.145222 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.145276 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.145378 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:29 crc kubenswrapper[4808]: E0217 15:54:29.145490 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:29 crc kubenswrapper[4808]: E0217 15:54:29.145624 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:29 crc kubenswrapper[4808]: E0217 15:54:29.145745 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.147410 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250335 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.250365 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353759 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.353772 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.421383 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/0.log" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.425125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.425777 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.446538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456831 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 
15:54:29.456851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.456863 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.462614 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.477493 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.494381 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.515928 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.543701 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559446 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559465 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559493 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.559528 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.567671 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.584435 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.598895 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.630075 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.646234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.662010 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664846 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.664938 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.676235 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.693215 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:29Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769077 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769142 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.769184 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872524 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872644 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.872694 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976385 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976458 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:29 crc kubenswrapper[4808]: I0217 15:54:29.976471 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:29Z","lastTransitionTime":"2026-02-17T15:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.080300 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.107075 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:39:35.608313609 +0000 UTC Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.184726 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287340 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287426 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287486 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.287512 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391706 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391830 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.391883 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.433015 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.434280 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/0.log" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.440057 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" exitCode=1 Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.440134 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.440218 4808 scope.go:117] "RemoveContainer" containerID="84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.441655 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:30 crc kubenswrapper[4808]: E0217 15:54:30.442034 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.463774 4808 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.490826 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496257 4808 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496378 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.496398 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.519463 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de58673
8e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.546963 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.574222 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.595974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.600980 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.601071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.619443 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.637098 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.660025 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.687366 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 
15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 
15:54:30.705648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.705949 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706325 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706419 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.706446 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.728850 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.755631 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.776354 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817327 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.817346 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.920735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:30Z","lastTransitionTime":"2026-02-17T15:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.957563 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6"] Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.958203 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.962885 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.963542 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.980451 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:30 crc kubenswrapper[4808]: I0217 15:54:30.994996 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf8029
9c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:30Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.018692 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.023997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024083 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.024185 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.040244 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.062964 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.077624 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.077924 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: 
\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.077985 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjv2r\" (UniqueName: \"kubernetes.io/projected/067d21e4-9618-42af-bb01-1ea41d1bd7ef-kube-api-access-mjv2r\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.078047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.081538 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.101217 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.107818 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:44:17.686530551 +0000 UTC Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.121610 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 
15:54:31.127448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.127514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.145620 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.145648 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.145797 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.145969 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.147067 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.149450 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.149616 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.165869 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.178863 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.178951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.178973 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjv2r\" (UniqueName: 
\"kubernetes.io/projected/067d21e4-9618-42af-bb01-1ea41d1bd7ef-kube-api-access-mjv2r\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.179004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.179978 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.180312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/067d21e4-9618-42af-bb01-1ea41d1bd7ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.194923 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.195395 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/067d21e4-9618-42af-bb01-1ea41d1bd7ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.207904 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjv2r\" (UniqueName: \"kubernetes.io/projected/067d21e4-9618-42af-bb01-1ea41d1bd7ef-kube-api-access-mjv2r\") pod \"ovnkube-control-plane-749d76644c-86pl6\" (UID: \"067d21e4-9618-42af-bb01-1ea41d1bd7ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 
15:54:31.211286 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230510 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.230607 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.234848 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.268160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84285376e3391c3ff95b82b22d09c3f0482b993cbcdb226ed8e86f7318a1eab7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:27Z\\\",\\\"message\\\":\\\" reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823454 6099 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0217 15:54:27.823478 6099 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0217 15:54:27.823501 6099 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0217 15:54:27.823550 6099 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0217 15:54:27.823566 6099 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0217 15:54:27.823609 6099 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0217 15:54:27.823712 6099 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0217 15:54:27.823793 6099 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0217 15:54:27.823869 6099 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0217 15:54:27.823886 6099 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0217 15:54:27.823927 6099 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0217 15:54:27.823948 6099 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0217 15:54:27.823967 6099 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0217 15:54:27.824263 6099 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 
15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overri
des\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 
15:54:31.282777 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.288103 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333528 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333650 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.333702 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437296 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.437357 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.460039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" event={"ID":"067d21e4-9618-42af-bb01-1ea41d1bd7ef","Type":"ContainerStarted","Data":"78819e453ccbb6cd63323c69e65a42d589263d8890f5f1c2679def34a5786d56"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.463404 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.468755 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.468952 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.490246 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.506079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: 
I0217 15:54:31.524439 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.539106 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 
15:54:31.543417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.543452 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.556249 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.570526 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.584294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.596127 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.611739 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.644326 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646255 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.646295 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.658387 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.672653 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.683844 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.698406 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.711877 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.729827 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-z8tn8"] Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.730504 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.730633 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.745369 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749632 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.749641 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.760894 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.777652 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.789697 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: 
I0217 15:54:31.804425 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.820074 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.836708 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.846337 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852187 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.852293 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.867048 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.888509 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.888561 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f79s\" (UniqueName: \"kubernetes.io/projected/b88c3e5f-7390-477c-ae74-aced26a8ddf9-kube-api-access-8f79s\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.895361 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.909313 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.930482 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.942492 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954817 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.954829 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:31Z","lastTransitionTime":"2026-02-17T15:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.955278 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.965470 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.979294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:31Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.989804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: I0217 15:54:31.989857 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f79s\" (UniqueName: \"kubernetes.io/projected/b88c3e5f-7390-477c-ae74-aced26a8ddf9-kube-api-access-8f79s\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.990105 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:31 crc kubenswrapper[4808]: E0217 15:54:31.990236 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:32.490202458 +0000 UTC m=+36.006561571 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.009935 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f79s\" (UniqueName: \"kubernetes.io/projected/b88c3e5f-7390-477c-ae74-aced26a8ddf9-kube-api-access-8f79s\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.057969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058032 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058044 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058066 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.058082 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.108512 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:02:57.486171536 +0000 UTC Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161646 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.161842 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264792 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264910 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.264929 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367378 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367439 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367468 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.367479 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471049 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471133 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.471186 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.474506 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" event={"ID":"067d21e4-9618-42af-bb01-1ea41d1bd7ef","Type":"ContainerStarted","Data":"ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.474615 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" event={"ID":"067d21e4-9618-42af-bb01-1ea41d1bd7ef","Type":"ContainerStarted","Data":"bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.496164 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:32 crc kubenswrapper[4808]: E0217 15:54:32.496407 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:32 crc kubenswrapper[4808]: E0217 15:54:32.496480 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:33.496462419 +0000 UTC m=+37.012821502 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.497220 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.515509 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.530630 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.551008 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc 
kubenswrapper[4808]: I0217 15:54:32.569003 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574367 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574466 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.574485 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.586352 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.607207 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.622329 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: 
I0217 15:54:32.637903 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.657744 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.677358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc 
kubenswrapper[4808]: I0217 15:54:32.678335 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.678381 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.680566 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.699764 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.714463 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.734203 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.754042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.768712 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:32Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782683 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.782716 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885385 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.885407 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.900531 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:32 crc kubenswrapper[4808]: E0217 15:54:32.900776 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:54:48.900731393 +0000 UTC m=+52.417090496 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:32 crc kubenswrapper[4808]: I0217 15:54:32.988790 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:32 crc kubenswrapper[4808]: 
I0217 15:54:32.988810 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:32Z","lastTransitionTime":"2026-02-17T15:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002781 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.002989 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003125 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003145 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003177 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003184 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003210 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003125 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003280 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003306 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003244 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003221962 +0000 UTC m=+52.519581075 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003398 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003358346 +0000 UTC m=+52.519717609 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003433 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003417717 +0000 UTC m=+52.519776820 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.003454 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:49.003443678 +0000 UTC m=+52.519802781 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008838 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.008905 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.038481 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043711 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043809 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.043875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.060017 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.064216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.064492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.064783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.065029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.065236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.078229 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083467 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083602 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.083614 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.104521 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.109287 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:29:14.417366291 +0000 UTC Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.110284 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.124242 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:33Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.124474 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127322 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.127430 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145143 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145177 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145217 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.145177 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145987 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.145992 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231315 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231346 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.231371 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.335931 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.336148 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.439922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.440120 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.509993 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.510240 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: E0217 15:54:33.510357 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:35.510325475 +0000 UTC m=+39.026684578 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.543774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.647639 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.751491 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855671 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.855719 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959353 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959418 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:33 crc kubenswrapper[4808]: I0217 15:54:33.959450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:33Z","lastTransitionTime":"2026-02-17T15:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063269 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063318 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.063340 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.110125 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:37:50.567919637 +0000 UTC Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167084 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.167216 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.271947 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.272155 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375458 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.375624 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479302 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.479462 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583699 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.583753 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.698247 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801724 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.801769 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905796 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:34 crc kubenswrapper[4808]: I0217 15:54:34.905814 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:34Z","lastTransitionTime":"2026-02-17T15:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009845 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.009914 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.111046 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:37:55.356080023 +0000 UTC Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113887 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.113961 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.145833 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.145908 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.145833 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146083 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146283 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.146682 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.146884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.217363 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.301504 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.320923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.320991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321059 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.321967 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de58673
8e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.343047 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.367155 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.389034 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: 
I0217 15:54:35.412430 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425632 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.425761 4808 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.428704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.448895 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.481932 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.507157 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.527284 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.529681 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.533351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.533681 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:35 crc kubenswrapper[4808]: E0217 15:54:35.533820 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:39.533785554 +0000 UTC m=+43.050144657 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.550443 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.569558 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc 
kubenswrapper[4808]: I0217 15:54:35.595231 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a6
26d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 
15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.612117 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.628739 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634281 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634407 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.634426 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.653507 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:35Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738801 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.738862 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842132 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.842176 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:35 crc kubenswrapper[4808]: I0217 15:54:35.946143 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:35Z","lastTransitionTime":"2026-02-17T15:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.050528 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.112275 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:19:55.89890822 +0000 UTC Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156919 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.156946 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.260923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.260993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.261012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.261040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.261057 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364530 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.364710 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467764 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.467784 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.571222 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.674927 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778508 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.778565 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.881861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.881960 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.881991 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.882030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.882055 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985768 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985933 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:36 crc kubenswrapper[4808]: I0217 15:54:36.985957 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:36Z","lastTransitionTime":"2026-02-17T15:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.089659 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.113250 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:18:52.192705736 +0000 UTC Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.145958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.146017 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.145958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.146239 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146254 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146387 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146629 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:37 crc kubenswrapper[4808]: E0217 15:54:37.146749 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.173211 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 
15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.191228 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193698 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193752 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.193774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.205727 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.225925 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.250223 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.268564 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.286948 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296152 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296173 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.296188 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.307557 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.324784 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.348872 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.374813 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.395623 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399224 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399270 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.399323 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.406518 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.420693 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.452649 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.472092 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:37Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502267 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502301 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.502339 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606173 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606209 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.606232 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.709942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.709989 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.710004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.710023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.710038 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.813627 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:37 crc kubenswrapper[4808]: I0217 15:54:37.917345 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:37Z","lastTransitionTime":"2026-02-17T15:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021485 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021600 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021696 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.021863 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.113447 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:54:37.878158138 +0000 UTC Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.125892 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.229453 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.334936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.335104 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.439720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440464 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.440656 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.544990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545061 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545080 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.545132 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647890 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.647902 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750822 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750869 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.750887 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853616 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.853640 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:38 crc kubenswrapper[4808]: I0217 15:54:38.956972 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:38Z","lastTransitionTime":"2026-02-17T15:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060090 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.060147 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.114455 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 20:24:10.760533693 +0000 UTC Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.144907 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.145036 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.145062 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.145325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145397 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.145475 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.163261 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.266832 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370260 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.370400 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474917 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.474947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.578912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579068 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.579091 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.590758 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.591035 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:39 crc kubenswrapper[4808]: E0217 15:54:39.591190 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:54:47.591150218 +0000 UTC m=+51.107509481 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.682630 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785331 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785405 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.785415 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888208 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.888269 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991308 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:39 crc kubenswrapper[4808]: I0217 15:54:39.991350 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:39Z","lastTransitionTime":"2026-02-17T15:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.093892 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.093956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.093978 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.094005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.094024 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.114621 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:09:34.343156238 +0000 UTC Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197356 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197414 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.197432 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300105 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300125 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.300135 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.402888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403000 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.403070 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506123 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.506136 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609368 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.609389 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713172 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713209 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.713236 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816190 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.816204 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919511 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919588 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919609 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:40 crc kubenswrapper[4808]: I0217 15:54:40.919622 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:40Z","lastTransitionTime":"2026-02-17T15:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022344 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.022441 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.115652 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:05:27.135852185 +0000 UTC Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125027 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.125136 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.145662 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.145755 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.145909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.146174 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.146386 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.146448 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.146844 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:41 crc kubenswrapper[4808]: E0217 15:54:41.147077 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227141 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.227178 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.330112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.330765 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.330937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.331114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.331281 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.434748 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.537932 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.538035 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641081 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.641093 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744316 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.744329 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.847410 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.950999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951062 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951082 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:41 crc kubenswrapper[4808]: I0217 15:54:41.951100 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:41Z","lastTransitionTime":"2026-02-17T15:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.055404 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.115807 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:44:45.505893387 +0000 UTC Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.158844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.158915 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.158939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.159045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.159071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.262734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263029 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263290 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.263377 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.369698 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473822 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.473839 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577508 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577593 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.577603 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.683859 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684288 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.684464 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787318 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787825 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.787955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.788076 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.890968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891014 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.891051 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.993985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994043 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:42 crc kubenswrapper[4808]: I0217 15:54:42.994051 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:42Z","lastTransitionTime":"2026-02-17T15:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.097694 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.097786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.097807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.098428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.098705 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.117192 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:10:00.748856459 +0000 UTC Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145364 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145555 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145779 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.145879 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.145862 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.146151 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.146291 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.146432 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202294 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202387 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202421 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.202450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.305623 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.322362 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.345821 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352081 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352150 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.352217 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.373811 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous retry, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379470 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379491 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.379547 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.393209 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous retry, elided ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.401538 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.416995 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423508 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.423560 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.441334 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:43Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:43 crc kubenswrapper[4808]: E0217 15:54:43.441610 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445156 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445181 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.445198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.548931 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658713 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.658855 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.762529 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.871698 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.974899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.974969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.974982 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.975002 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:43 crc kubenswrapper[4808]: I0217 15:54:43.975015 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:43Z","lastTransitionTime":"2026-02-17T15:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077826 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.077834 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.117438 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:37:13.588781464 +0000 UTC Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180522 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.180537 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283843 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283857 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.283866 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.386795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.386954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.386986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.387019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.387044 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489517 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489601 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.489632 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592791 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592883 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.592913 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.695964 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696007 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.696044 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799160 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.799194 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901913 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:44 crc kubenswrapper[4808]: I0217 15:54:44.901925 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:44Z","lastTransitionTime":"2026-02-17T15:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004705 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.004714 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107335 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.107359 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.118449 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:58:10.264282351 +0000 UTC Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145073 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.145657 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145889 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.145983 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.146165 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.146965 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:45 crc kubenswrapper[4808]: E0217 15:54:45.147133 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.210259 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.312996 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416881 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416972 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.416997 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.417015 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520762 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520809 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.520836 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626100 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626117 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.626154 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729375 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729487 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.729506 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832451 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.832541 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:45 crc kubenswrapper[4808]: I0217 15:54:45.935118 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:45Z","lastTransitionTime":"2026-02-17T15:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.039879 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.119132 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:52:53.426421425 +0000 UTC Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144191 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144326 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.144347 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.145973 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.248701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249142 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249172 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.249213 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352330 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.352473 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455885 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.455904 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.536154 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.540671 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.541985 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.558902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2
832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560792 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 
15:54:46.560926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.560948 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.589867 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.627333 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.654105 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc 
kubenswrapper[4808]: I0217 15:54:46.664745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.664794 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.678107 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.702126 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.720016 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.744288 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed517
19f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.762925 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.768863 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.779792 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.793202 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.811136 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.826125 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.839940 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.858531 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873107 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.873198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.874710 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:46Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976347 4808 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976360 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:46 crc kubenswrapper[4808]: I0217 15:54:46.976393 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:46Z","lastTransitionTime":"2026-02-17T15:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081150 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.081192 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.119526 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 07:49:28.729492608 +0000 UTC Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.144960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.145051 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.145164 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145159 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.145325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145440 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145315 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.145780 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.166736 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187672 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187768 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187816 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.187889 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.203671 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.218566 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.229389 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.249091 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.261515 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.276458 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.289329 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.290952 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.290985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.290996 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.291012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.291022 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.303170 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.314711 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.330053 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.342501 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.357042 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.372234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.383822 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: 
I0217 15:54:47.393815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393913 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.393929 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497743 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497814 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.497844 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.547428 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.548313 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/1.log" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.552061 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" exitCode=1 Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.552104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.552157 4808 scope.go:117] "RemoveContainer" containerID="efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.553102 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.553340 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.576918 4808 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.596240 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602524 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc 
kubenswrapper[4808]: I0217 15:54:47.602678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.602697 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.617564 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 
15:54:47.633265 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.653148 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.681181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.681497 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:47 crc kubenswrapper[4808]: E0217 15:54:47.681618 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:03.681566257 +0000 UTC m=+67.197925370 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.682650 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efef33a328c17ebb52448542ea1a70587b2bd3219b0f9bbd3eec8074885d14d2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:29Z\\\",\\\"message\\\":\\\"false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.138:50051:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {97419c58-41c7-41d7-a137-a446f0c7eeb3}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0217 15:54:29.419850 6225 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0217 15:54:29.420431 6225 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-config-operator/machine-config-daemon]} name:Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.43:8798: 10.217.4.43:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0217 15:54:29.420614 6225 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for 
network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{
\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 
15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.705960 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706054 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.706071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.731838 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.757545 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T
15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d
4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.779794 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.797071 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.809875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.814083 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc 
kubenswrapper[4808]: I0217 15:54:47.837017 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.851863 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f5
6adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.871487 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.896374 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913287 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:47Z","lastTransitionTime":"2026-02-17T15:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:47 crc kubenswrapper[4808]: I0217 15:54:47.913307 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016425 4808 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.016496 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119787 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119806 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119755 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:39:02.480683182 +0000 UTC Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119838 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.119921 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.222959 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326366 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326475 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326505 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.326525 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.430480 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.533894 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.559370 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.565533 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:54:48 crc kubenswrapper[4808]: E0217 15:54:48.565912 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.589543 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.610517 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.633384 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636876 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636887 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636905 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.636918 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.655568 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.678854 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.699825 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.714681 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79
3426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.731790 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740016 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740084 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740122 
4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.740138 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.751152 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.767454 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.784349 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.799991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.813765 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.828170 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842755 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842858 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842892 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.842919 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.860103 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.888117 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:48Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945753 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.945787 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:48Z","lastTransitionTime":"2026-02-17T15:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:48 crc kubenswrapper[4808]: I0217 15:54:48.999015 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:54:48 crc kubenswrapper[4808]: E0217 15:54:48.999323 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:55:20.999270684 +0000 UTC m=+84.515629797 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051137 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.051179 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101168 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.101296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101376 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101437 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101463 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101477 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101442 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:21.101424073 +0000 UTC m=+84.617783166 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101535 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:21.101523366 +0000 UTC m=+84.617882449 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101623 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101641 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101658 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101685 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101716 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-17 15:55:21.101699621 +0000 UTC m=+84.618058714 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.101956 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:21.101877305 +0000 UTC m=+84.618236378 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.120411 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:56:05.744036073 +0000 UTC Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.144812 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.144976 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.144981 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145090 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.145123 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145641 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145686 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:49 crc kubenswrapper[4808]: E0217 15:54:49.145279 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157080 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.157194 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261152 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261570 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.261646 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364660 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364679 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364704 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.364722 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468784 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.468921 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571049 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.571177 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674942 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.674986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.675007 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.742802 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.762643 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.774665 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780448 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc 
kubenswrapper[4808]: I0217 15:54:49.780477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.780499 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.797339 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.816736 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.835334 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc 
kubenswrapper[4808]: I0217 15:54:49.859104 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.882707 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f5
6adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.884222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885548 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc 
kubenswrapper[4808]: I0217 15:54:49.885777 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.885803 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.903901 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.928469 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.958242 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: 
I0217 15:54:49.981978 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:49Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.988951 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989031 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989051 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:49 crc kubenswrapper[4808]: I0217 15:54:49.989109 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:49Z","lastTransitionTime":"2026-02-17T15:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.004593 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.056429 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.075886 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.092534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.092850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.092937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.093009 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.093081 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.105795 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z 
is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.121495 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:32:05.709766168 +0000 UTC Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.132031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.149625 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:50Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196332 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196370 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.196403 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299610 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.299633 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403176 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403231 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403275 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.403295 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506083 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.506178 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609630 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.609665 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713358 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.713412 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817343 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817461 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.817481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.920863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.920948 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.920987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.921028 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:50 crc kubenswrapper[4808]: I0217 15:54:50.921055 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:50Z","lastTransitionTime":"2026-02-17T15:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.025284 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.122683 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:53:48.028469062 +0000 UTC Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.129307 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.145773 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.146183 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.146305 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.146403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.146520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.146644 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.146916 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:51 crc kubenswrapper[4808]: E0217 15:54:51.147216 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233100 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233251 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.233277 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337005 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337128 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.337191 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441244 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441263 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.441315 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546045 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546120 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.546197 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650682 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650764 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650784 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.650836 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754249 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754366 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.754377 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857825 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857853 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.857873 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962113 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:51 crc kubenswrapper[4808]: I0217 15:54:51.962264 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:51Z","lastTransitionTime":"2026-02-17T15:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065883 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.065946 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.123219 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:04:09.428987082 +0000 UTC Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.170647 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274314 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.274392 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378041 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.378169 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481514 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.481671 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585159 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585200 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.585218 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691966 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.691986 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796063 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796122 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.796182 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:52 crc kubenswrapper[4808]: I0217 15:54:52.899500 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:52Z","lastTransitionTime":"2026-02-17T15:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003789 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.003835 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107718 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.107753 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.123837 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:57:02.489994545 +0000 UTC Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145613 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145611 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145854 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.145990 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.146158 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.145817 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.146391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.146547 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.210481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313357 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313416 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.313444 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417365 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417386 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.417402 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520920 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520950 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.520988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.521009 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624334 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.624402 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.729703 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731400 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731494 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.731546 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.750817 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757173 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.757241 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.775743 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782146 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.782200 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.797743 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802830 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802847 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802867 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.802880 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.818696 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823272 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.823399 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.839894 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:53Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:53 crc kubenswrapper[4808]: E0217 15:54:53.840115 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842344 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842422 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842443 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.842461 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.945985 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946052 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946070 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946097 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:53 crc kubenswrapper[4808]: I0217 15:54:53.946117 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:53Z","lastTransitionTime":"2026-02-17T15:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049819 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049937 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.049958 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.124718 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:23:18.17107601 +0000 UTC Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162765 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162868 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.162882 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266502 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266615 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.266858 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370771 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.370790 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473850 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.473992 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.577869 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681703 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.681758 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784814 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.784828 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887903 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887959 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.887969 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990631 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990643 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:54 crc kubenswrapper[4808]: I0217 15:54:54.990680 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:54Z","lastTransitionTime":"2026-02-17T15:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094074 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094201 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.094224 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.124881 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 08:51:26.626848063 +0000 UTC Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.145727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.145939 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146032 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.146095 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.146053 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146229 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:55 crc kubenswrapper[4808]: E0217 15:54:55.146651 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198400 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198417 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.198474 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303011 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.303106 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406624 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.406927 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.509938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510210 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510281 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510344 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.510414 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613459 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.613635 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716598 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.716631 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819819 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.819833 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923234 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923291 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923336 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:55 crc kubenswrapper[4808]: I0217 15:54:55.923349 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:55Z","lastTransitionTime":"2026-02-17T15:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027865 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027899 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.027919 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.125782 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:05:57.848483684 +0000 UTC Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131132 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.131320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.234902 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.234975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.234994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.235020 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.235040 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339086 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339199 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339217 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339253 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.339272 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441950 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441983 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.441997 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545485 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.545505 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.649551 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.759739 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863000 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863084 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863163 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.863188 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967292 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:56 crc kubenswrapper[4808]: I0217 15:54:56.967314 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:56Z","lastTransitionTime":"2026-02-17T15:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070320 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070413 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.070448 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.126517 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:19:48.13221218 +0000 UTC Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145544 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145642 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145670 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.145763 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.145922 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.146105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.146325 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:54:57 crc kubenswrapper[4808]: E0217 15:54:57.146986 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.165713 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.172998 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173073 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173127 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.173149 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.183271 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.207556 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.226660 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.245786 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.271746 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276076 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276101 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.276117 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.301280 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.318405 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.341054 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.368189 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.379871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.379949 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.379968 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.380004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.380023 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.387734 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.409287 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.428213 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.444700 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.461534 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc 
kubenswrapper[4808]: I0217 15:54:57.483533 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a6
26d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 
15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484086 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.484121 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.503217 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:57Z is after 2025-08-24T17:21:41Z"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587667 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.587731 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.691275 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.794759 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898367 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:57 crc kubenswrapper[4808]: I0217 15:54:57.898413 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:57Z","lastTransitionTime":"2026-02-17T15:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001404 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001430 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.001447 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105143 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.105188 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.127508 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:49:11.014264329 +0000 UTC
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.208893 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.208971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.208990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.209017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.209036 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313197 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313236 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.313258 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416690 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416737 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416749 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416767 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.416778 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.519901 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.519969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.519984 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.520012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.520032 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622783 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622853 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.622885 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.726993 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727079 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.727124 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.830652 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934347 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934373 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:58 crc kubenswrapper[4808]: I0217 15:54:58.934391 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:58Z","lastTransitionTime":"2026-02-17T15:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042515 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042627 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.042641 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.127951 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:36:46.317969178 +0000 UTC
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.144847 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.144901 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.144967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.145058 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.145612 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.145740 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.145898 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:54:59 crc kubenswrapper[4808]: E0217 15:54:59.146030 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153019 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153139 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.153158 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257365 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.257446 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361221 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361239 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.361287 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464216 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.464255 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567087 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567222 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.567242 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671593 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671675 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.671708 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774799 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774855 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.774875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878071 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878088 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.878131 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982235 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982264 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:54:59 crc kubenswrapper[4808]: I0217 15:54:59.982333 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:54:59Z","lastTransitionTime":"2026-02-17T15:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086757 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.086813 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.128976 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:25:47.909029977 +0000 UTC
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.190661 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293777 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.293843 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397133 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397149 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.397196 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.500619 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603875 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603895 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603925 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.603947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.706598 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810333 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810392 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.810414 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913728 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913754 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:00 crc kubenswrapper[4808]: I0217 15:55:00.913774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:00Z","lastTransitionTime":"2026-02-17T15:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017530 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017676 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.017729 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120342 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120411 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120434 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.120466 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.130204 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:08:22.920652421 +0000 UTC Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145713 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.145850 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145968 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145992 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.145960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.146078 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.146210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:01 crc kubenswrapper[4808]: E0217 15:55:01.146454 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223058 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223121 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223176 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.223198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.327698 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.430672 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533604 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533640 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.533655 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.635648 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738543 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738674 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.738686 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841250 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.841408 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.943979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944057 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:01 crc kubenswrapper[4808]: I0217 15:55:01.944071 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:01Z","lastTransitionTime":"2026-02-17T15:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048436 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.048462 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.130364 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:06:17.32783357 +0000 UTC Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.145746 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:55:02 crc kubenswrapper[4808]: E0217 15:55:02.146056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152075 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.152122 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254494 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.254541 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357909 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357923 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357945 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.357962 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.461631 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564900 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564912 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564933 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.564946 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.667930 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.667996 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.668010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.668033 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.668055 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771198 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771274 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.771307 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873594 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873647 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.873693 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.976935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977069 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:02 crc kubenswrapper[4808]: I0217 15:55:02.977094 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:02Z","lastTransitionTime":"2026-02-17T15:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079465 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079478 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.079514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.131137 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:26:51.87595655 +0000 UTC Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144854 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144881 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144940 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.144858 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145028 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.145412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182483 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.182619 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285015 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285060 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285074 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285093 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.285104 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.388513 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492144 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.492232 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596099 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596182 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596211 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.596222 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.682432 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.682673 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:03 crc kubenswrapper[4808]: E0217 15:55:03.682742 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:55:35.682726124 +0000 UTC m=+99.199085197 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699094 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699114 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.699174 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801627 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801695 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.801743 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905744 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905791 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:03 crc kubenswrapper[4808]: I0217 15:55:03.905810 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:03Z","lastTransitionTime":"2026-02-17T15:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009528 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009556 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.009569 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011363 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011390 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.011455 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.039714 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045499 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.045612 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.063727 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069064 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069155 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069170 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.069180 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.089660 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094832 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.094881 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.110348 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115294 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115350 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115366 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.115404 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.128851 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:04Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:04 crc kubenswrapper[4808]: E0217 15:55:04.129013 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131243 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:53:32.434478704 +0000 UTC Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131602 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.131747 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235736 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235795 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.235816 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.339869 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340252 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340345 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340821 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.340933 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444518 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444536 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444599 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.444629 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548369 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548393 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.548451 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650834 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650862 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.650944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.651013 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755354 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755379 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.755398 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859786 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859807 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.859854 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962628 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962723 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:04 crc kubenswrapper[4808]: I0217 15:55:04.962776 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:04Z","lastTransitionTime":"2026-02-17T15:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066095 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066198 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.066249 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.132124 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:22:36.677875395 +0000 UTC Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.145908 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.145933 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146054 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.146054 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.146138 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146362 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:05 crc kubenswrapper[4808]: E0217 15:55:05.146683 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168722 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.168734 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272025 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272077 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272089 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272111 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.272124 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375204 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.375295 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.478806 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581567 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.581595 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684555 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684596 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684625 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.684644 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787339 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787410 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.787467 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.890820 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.890877 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.897815 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.898026 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:05 crc kubenswrapper[4808]: I0217 15:55:05.898083 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:05Z","lastTransitionTime":"2026-02-17T15:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002401 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002482 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002534 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.002560 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105437 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105452 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.105557 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.132987 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 16:34:25.602605336 +0000 UTC Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209141 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209234 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209262 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.209310 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312175 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312237 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312256 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.312304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416098 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416151 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.416211 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518546 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518618 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.518638 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621380 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621425 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.621447 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.635698 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/0.log" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.635787 4808 generic.go:334] "Generic (PLEG): container finished" podID="18916d6d-e063-40a0-816f-554f95cd2956" containerID="d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1" exitCode=1 Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.635838 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerDied","Data":"d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.636530 4808 scope.go:117] "RemoveContainer" containerID="d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.660093 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.682613 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.698123 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.714460 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: 
I0217 15:55:06.727195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727233 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.727279 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.736599 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.760036 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.777303 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.798494 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.817099 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830707 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830723 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830753 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.830774 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.834182 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.848989 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.865521 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.880872 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.894330 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.907909 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.919766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933240 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933285 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933301 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.933311 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:06Z","lastTransitionTime":"2026-02-17T15:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:06 crc kubenswrapper[4808]: I0217 15:55:06.940853 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:06Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036409 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036428 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.036450 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.133616 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 18:26:11.207603569 +0000 UTC Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139295 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139310 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.139347 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145462 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145462 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145551 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.145657 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.145868 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.145953 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.145996 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:07 crc kubenswrapper[4808]: E0217 15:55:07.146175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.160773 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.175112 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.193967 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.219139 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.244791 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.247852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248133 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248145 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248167 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.248185 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.263795 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.274857 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.289741 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.310716 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.323187 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.338082 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350879 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350956 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 
15:55:07.350977 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.350991 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.353058 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.363521 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.374659 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc 
kubenswrapper[4808]: I0217 15:55:07.389111 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a6
26d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 
15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.400555 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.414472 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454183 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454241 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.454267 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.557621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558007 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558039 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.558054 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.640815 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/0.log" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.640878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.658748 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660888 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660953 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660969 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.660988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.661001 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.677083 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-co
ntroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.693738 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.713790 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.727247 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: 
I0217 15:55:07.740196 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.755819 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763633 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 
15:55:07.763652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.763665 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.770397 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.787823 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.801186 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.847036 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867092 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867157 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867174 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 
15:55:07.867203 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.867223 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.870160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.887123 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.912427 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\"
,\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed517
19f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.931974 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.944836 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.956018 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:07Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970374 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970432 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970454 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:07 crc kubenswrapper[4808]: I0217 15:55:07.970501 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:07Z","lastTransitionTime":"2026-02-17T15:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074277 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074299 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.074313 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.134197 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:05:06.130837619 +0000 UTC Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177889 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177951 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.177994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.178013 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.280926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281004 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281027 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281054 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.281078 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384623 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384823 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.384841 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487797 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487878 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487897 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.487947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591501 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591627 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591657 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591694 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.591718 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695424 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695525 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.695610 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799351 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799364 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799382 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.799395 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903337 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903429 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903460 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:08 crc kubenswrapper[4808]: I0217 15:55:08.903481 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:08Z","lastTransitionTime":"2026-02-17T15:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006348 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006408 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006421 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006444 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.006460 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.108928 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.108986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.109000 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.109021 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.109034 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.135235 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:46:13.981476476 +0000 UTC Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.145706 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.145790 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.146110 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146176 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146353 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.146452 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:09 crc kubenswrapper[4808]: E0217 15:55:09.146878 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212497 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212545 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212602 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.212616 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315891 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.315934 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419259 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419276 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419300 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.419318 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522793 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522872 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.522882 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626873 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626957 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.626990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.627005 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729769 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729861 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.729875 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833378 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833415 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.833428 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936590 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:09 crc kubenswrapper[4808]: I0217 15:55:09.936702 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:09Z","lastTransitionTime":"2026-02-17T15:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040273 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040332 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040343 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040362 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.040374 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.136351 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 23:07:56.272787719 +0000 UTC Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144666 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144774 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144808 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.144834 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247519 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247626 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.247653 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351618 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.351689 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460613 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.460675 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563303 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.563419 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666427 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666479 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666516 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.666529 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769629 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769724 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.769739 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872435 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872511 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.872547 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976552 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976652 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:10 crc kubenswrapper[4808]: I0217 15:55:10.976674 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:10Z","lastTransitionTime":"2026-02-17T15:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080466 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080523 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.080545 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.137482 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:59:46.503713691 +0000 UTC Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.144842 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.145029 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.145165 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.145251 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.145305 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.145343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.145682 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:11 crc kubenswrapper[4808]: E0217 15:55:11.146026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.183975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184017 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184034 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.184074 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287462 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287513 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287529 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287551 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.287569 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391763 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.391831 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495286 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495342 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.495364 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599715 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599800 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599828 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.599847 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703293 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.703320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.807971 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808047 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808066 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808096 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.808120 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911776 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911804 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:11 crc kubenswrapper[4808]: I0217 15:55:11.911864 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:11Z","lastTransitionTime":"2026-02-17T15:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015395 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015445 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015476 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.015489 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118760 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.118824 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.138597 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:44:54.513654428 +0000 UTC Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222729 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.222803 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326147 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326258 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.326276 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430012 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430115 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430165 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430193 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.430214 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533477 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533607 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533642 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.533667 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638023 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.638172 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.741935 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.741999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.742013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.742038 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.742056 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846104 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846168 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.846217 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949403 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949521 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949548 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949614 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:12 crc kubenswrapper[4808]: I0217 15:55:12.949667 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:12Z","lastTransitionTime":"2026-02-17T15:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054124 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054267 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054300 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.054320 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.139623 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:54:16.680292903 +0000 UTC Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.145038 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.145474 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.145753 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.145816 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.146926 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.147046 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.147462 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:13 crc kubenswrapper[4808]: E0217 15:55:13.147639 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156866 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.156897 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260437 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260533 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260628 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.260651 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363612 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363704 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363733 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.363752 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467229 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467305 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467324 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467355 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.467379 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570349 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570368 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.570418 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672904 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.672961 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776903 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776921 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776949 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.776970 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880962 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880986 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.880999 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984664 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984770 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:13 crc kubenswrapper[4808]: I0217 15:55:13.984785 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:13Z","lastTransitionTime":"2026-02-17T15:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088639 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088731 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088798 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.088825 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.141990 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:41:02.529625845 +0000 UTC Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192383 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192440 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192457 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.192502 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296483 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296554 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296605 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.296655 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334202 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334283 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334311 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.334329 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.357018 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363766 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363854 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363884 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363922 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.363947 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.385990 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393106 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.393271 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.414690 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420566 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420669 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420719 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.420740 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.441554 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447453 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447472 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447500 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.447522 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.469315 4808 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7379f6dd-5937-4d60-901f-8c9dc45481b3\\\",\\\"systemUUID\\\":\\\"8fe3bc97-dd01-4038-9ff9-743e71f8162b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:14Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:14 crc kubenswrapper[4808]: E0217 15:55:14.469531 4808 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472346 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472526 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.472616 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576538 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.576718 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679279 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679361 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679384 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679419 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.679439 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783611 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783687 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783710 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783741 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.783762 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887373 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887394 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.887445 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990186 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990226 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990238 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990257 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:14 crc kubenswrapper[4808]: I0217 15:55:14.990271 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:14Z","lastTransitionTime":"2026-02-17T15:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094565 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094667 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094685 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094712 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.094734 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.142827 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:39:41.082982418 +0000 UTC Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.145736 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.145806 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.145970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.146130 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.146278 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.146304 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.146493 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:15 crc kubenswrapper[4808]: E0217 15:55:15.146873 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198788 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198812 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198843 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.198864 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304503 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304606 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304630 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.304687 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441926 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.441979 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.442010 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545620 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545692 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.545741 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649730 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649837 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649864 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.649883 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753780 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753852 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.753877 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857441 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857459 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.857514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961621 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961735 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:15 crc kubenswrapper[4808]: I0217 15:55:15.961754 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:15Z","lastTransitionTime":"2026-02-17T15:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066849 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066932 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.066975 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.143224 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:48:35.189546079 +0000 UTC Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172184 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172255 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172296 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.172310 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276110 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276185 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276215 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.276233 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379643 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379746 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379779 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.379808 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483677 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483739 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483756 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483781 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.483800 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587683 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587760 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587778 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587802 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.587818 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691162 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691244 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691266 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.691316 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794805 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794908 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794939 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.794978 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.795004 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901109 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901154 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901189 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901213 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:16 crc kubenswrapper[4808]: I0217 15:55:16.901229 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:16Z","lastTransitionTime":"2026-02-17T15:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003871 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003940 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003974 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.003986 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106835 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106915 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106936 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106970 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.106994 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.144463 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:54:30.415624609 +0000 UTC Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.144868 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.145099 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.145304 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.145433 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145532 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:17 crc kubenswrapper[4808]: E0217 15:55:17.145789 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.147845 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.167200 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.183392 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.200079 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211024 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc 
kubenswrapper[4808]: I0217 15:55:17.211091 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211112 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211190 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.211281 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.219886 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.236145 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: 
I0217 15:55:17.263234 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.280286 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.297879 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315055 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315171 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315196 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 
15:55:17.315227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.315251 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.332488 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.354639 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.379784 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.400682 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.418044 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419775 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419789 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.419824 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.435766 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 
17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.453021 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:
//695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastS
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.469957 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.483168 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.522863 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523289 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523420 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523559 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.523728 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627686 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627713 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627751 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.627775 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.693898 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.701269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.702009 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.724799 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.730948 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.730990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.731008 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.731030 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.731044 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.746119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.768031 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889564 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889619 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889630 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.889661 4808 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:17Z","lastTransitionTime":"2026-02-17T15:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.888518 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.914515 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad
4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.931265 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] 
Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.959399 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, 
hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service 
openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\
\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.974902 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:17 crc kubenswrapper[4808]: I0217 15:55:17.990533 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:17Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002587 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002635 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002648 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 
15:55:18.002693 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.002706 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.011110 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.025704 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.037946 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.053865 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec78621261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.081756 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.099105 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105662 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105684 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.105736 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.115924 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.136641 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.144659 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:12:38.599292047 +0000 UTC Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.158188 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208563 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208653 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208672 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208701 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.208721 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.311941 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.311999 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.312018 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.312046 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.312064 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415398 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415488 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415512 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415544 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.415564 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519065 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519138 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519192 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.519215 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622661 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622733 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622759 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.622775 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.709287 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.710348 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/2.log" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.714778 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" exitCode=1 Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.714950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.715040 4808 scope.go:117] "RemoveContainer" containerID="5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.716121 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:18 crc kubenswrapper[4808]: E0217 15:55:18.716405 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726149 4808 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726298 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726322 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726397 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.726432 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.742141 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de58673
8e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.760294 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.784160 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.808615 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: 
I0217 15:55:18.830886 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.830961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.830981 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.831010 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.831029 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.847528 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d307d637e95a78d79b622b1de7d0ed293b2e0e690f6b661e6f8ed1c3ab91673\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:54:47Z\\\",\\\"message\\\":\\\"s{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0217 15:54:47.336335 6443 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0217 15:54:47.336345 6443 services_controller.go:445] Built service 
openshift-console-operator/metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0217 15:54:47.336359 6443 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:54:47Z is after 2025-08-24T17:21:41Z]\\\\nI0217 15:54:47.336366 6443 services_controller.go:451] Built service openshift-consol\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"il), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:18.361927 6847 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362067 6847 services_controller.go:452] Built service openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362078 6847 services_controller.go:454] Service 
openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0217 15:55:18.362100 6847 services_controller.go:453] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB for network=default: []services.LB{}\\\\nF0217 15:55:18.362112 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-n
etd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.869636 4808 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.886510 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03f8049-78a3-4d6f-a6a2-894fc1a93f11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ac0bd1d8baf07179e911a15b5cb9c2137b2711e56c6a0243052ad67ff8ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.918549 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934321 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:18 crc 
kubenswrapper[4808]: I0217 15:55:18.934389 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934409 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.934423 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:18Z","lastTransitionTime":"2026-02-17T15:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.945072 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.968329 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:18 crc kubenswrapper[4808]: I0217 15:55:18.986515 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:18Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.013089 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.032724 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040123 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040166 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040179 4808 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040206 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.040223 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.082307 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.103013 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.121262 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.140221 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc 
kubenswrapper[4808]: I0217 15:55:19.144825 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144909 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144924 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:05:27.892137941 +0000 UTC Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.144825 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145083 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145161 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145496 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145500 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145651 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145688 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.145692 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.145714 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.168214 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249135 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249212 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249230 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249261 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.249280 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353882 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353963 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.353992 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.354008 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457491 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457535 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457550 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457592 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.457607 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560816 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560851 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560880 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.560890 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664667 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664678 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.664711 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.723203 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.730064 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:19 crc kubenswrapper[4808]: E0217 15:55:19.730884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.750724 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768179 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768245 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768271 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc 
kubenswrapper[4808]: I0217 15:55:19.768309 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.768339 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.773015 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6556f8ef16656338bd11e718549ef3c019e96928825ab9dc0596f24b8f43e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbc64aec6f296c59b9fb1e8c183c9f80c346f2d76620db59376c914ffcec02b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 
15:55:19.788980 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-f8pfh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13cb51e0-9eb4-4948-a9bf-93cddaa429fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e67e9f34fe5e5e9f272673e47a80dfec89a2832289e719b09d5a13399412b2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkcvd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-f8pfh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.813991 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-msgfd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18916d6d-e063-40a0-816f-554f95cd2956\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:55:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:05Z\\\",\\\"message\\\":\\\"2026-02-17T15:54:20+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b\\\\n2026-02-17T15:54:20+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c64dd7e9-22dc-4a6f-a49b-f38d3cbe118b to /host/opt/cni/bin/\\\\n2026-02-17T15:54:20Z [verbose] multus-daemon started\\\\n2026-02-17T15:54:20Z [verbose] Readiness Indicator file check\\\\n2026-02-17T15:55:05Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qmn2s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-msgfd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.848119 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5748f02a-e3dd-47c7-b89d-b472c718e593\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-17T15:55:18Z\\\",\\\"message\\\":\\\"il), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0217 15:55:18.361927 6847 services_controller.go:453] Built service openshift-network-diagnostics/network-check-target template LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362067 6847 services_controller.go:452] Built service 
openshift-operator-lifecycle-manager/olm-operator-metrics per-node LB for network=default: []services.LB{}\\\\nI0217 15:55:18.362078 6847 services_controller.go:454] Service openshift-network-diagnostics/network-check-target for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0217 15:55:18.362100 6847 services_controller.go:453] Built service openshift-operator-lifecycle-manager/olm-operator-metrics template LB for network=default: []services.LB{}\\\\nF0217 15:55:18.362112 6847 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:55:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://35ad82d8d6c808887e
0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qnzj8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tgvlh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.868790 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"759d5f61-7cb6-48e5-878f-b6598b2e3736\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4372c35d9db61ec94e0ea9eacf8c4e39b960530780a05f7d69ef2a050d38d23b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d7c05a68a98372cde4e26c0c61f336641b7554e44bea9c4d240fed31e6b366b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://defa2be2862e24dfc99982183beaa92c8114cc81036544f19ed8bb4e10b0b09a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://51962c47ab63116fa62604c3cc5603db1b7b4015519052616c363dc21c7cb913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873601 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873656 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873674 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873702 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.873723 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.888505 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d03f8049-78a3-4d6f-a6a2-894fc1a93f11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670ac0bd1d8baf07179e911a15b5cb9c2137b2711e56c6a0243052ad67ff8ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878385dba8da392fa6524e2bd7051d00b7423ba16efe985229cc6e353f150159\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.910071 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.928478 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"067d21e4-9618-42af-bb01-1ea41d1bd7ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcb207e998564484db273e9e68e20e49fb986fc4644b656e17b5c3fea9fb4eb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded2fa969b96132c1a5953da41b9418ec7862
1261888216b3854bc3cacb7bca6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjv2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-86pl6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.946797 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pr5s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4989dd6-5d44-42b5-882c-12a10ffc7911\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://228e9f46385cedf80299c68685a8b2b94d96c41ade18eeea5de7a83c648cf704\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2xc9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pr5s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.965460 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b88c3e5f-7390-477c-ae74-aced26a8ddf9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8f79s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z8tn8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:19 crc 
kubenswrapper[4808]: I0217 15:55:19.980312 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980404 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980425 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980458 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.980486 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:19Z","lastTransitionTime":"2026-02-17T15:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:19 crc kubenswrapper[4808]: I0217 15:55:19.993120 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"efd34c89-7350-4ce0-83d9-302614df88f7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-17T15:54:16Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0217 15:54:01.029442 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0217 15:54:01.030078 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660512818/tls.crt::/tmp/serving-cert-2660512818/tls.key\\\\\\\"\\\\nI0217 15:54:16.361222 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0217 15:54:16.370125 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0217 15:54:16.370169 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0217 15:54:16.370202 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0217 15:54:16.370212 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0217 15:54:16.383437 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0217 15:54:16.383473 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383482 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0217 15:54:16.383488 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0217 15:54:16.383494 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0217 15:54:16.383498 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0217 15:54:16.383502 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0217 15:54:16.383616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0217 15:54:16.393934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:00Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:53:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:19Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.012819 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3aaaa97d92e1acc8fe17594a75ed3e720801983ea175873486102bca899d9c04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.039875 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5cb9af7fe50ad534e758ba5647e162dfc951f41f07330e8b671427811de556\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.061193 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca38b6e7-b21c-453d-8b6c-a163dac84b35\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14df09051221e795ef203b228b1f61d67e86d8052d81b4853a27d50d2b6e64bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bm52q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k8v8k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.081809 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e109410f-af42-4d80-bf58-9af3a5dde09a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2fd52f8fe1e994b2f877ce0843ce86d86d7674bace8c4ca163e3232248313435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c45de72b21abdab0a1073a9a1a357c8d593f68a339bf9b455b5e87aa7863aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://59dcbb2be526e98cfd0a3c8cf833d6cfdef0120c58b47e52fb62f56adffb1d9c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:53:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.084836 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.084917 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.084941 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.085116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.085145 4808 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.102844 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.127754 4808 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6c9480c-4161-4c38-bec1-0822c6692f6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-17T15:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53d750dff2e0aa3d65e2defbc3cdf44f48375946c7021c0b1e1056b5ed7d729e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-17T15:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f7ff08c4b4644f5ccdd318fbaa9d5d1083d60393529f7f3e03cefbf701f178d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8d4091ef21fb9fef52dafcd7f1d0e865ff57652fcb75d0ba1e16361bcb81f44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:20Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26ac79dab2ec2e8e379a62382daa37e5c1feaa0666d3c6426bd9a295c64fdd5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43f3b
959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43f3b959a4804631ce679ee8dd89b1fa9249892328d303865de288a5a7529af8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf535fc0e39f67860383b43629a84bb4608a6a5d42304c537ab91a306ed841c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89610759cc77f66154699ee9784109cba8ce21818125f447368e19fb6cc8cfb4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-17T15:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-17T15:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7t282\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-17T15:54:18Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kx4nl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-17T15:55:20Z is after 2025-08-24T17:21:41Z" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.145419 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:49:53.651855464 +0000 UTC Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188717 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188738 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.188986 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292721 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.292742 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396481 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396527 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396537 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.396601 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499474 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499484 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499506 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.499519 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.602943 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603287 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603396 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603553 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.603701 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707700 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707794 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707813 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707839 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.707855 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810691 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810742 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810772 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.810791 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.913916 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.913994 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.914013 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.914040 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:20 crc kubenswrapper[4808]: I0217 15:55:20.914059 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:20Z","lastTransitionTime":"2026-02-17T15:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017725 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017745 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017773 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.017803 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.034922 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.035145 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:25.035103539 +0000 UTC m=+148.551462652 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.121844 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.121944 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.121976 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.122003 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.122023 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137243 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137285 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.137349 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137403 4808 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137541 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137507351 +0000 UTC m=+148.653866454 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137541 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137554 4808 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137625 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137631 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137658 4808 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137679 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137704 4808 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137708 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137681175 +0000 UTC m=+148.654040278 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137747 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137728296 +0000 UTC m=+148.654087399 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.137782 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:25.137766247 +0000 UTC m=+148.654125360 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145146 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145196 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145159 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145361 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145409 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145607 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.145674 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 19:48:27.417156686 +0000 UTC Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145792 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:21 crc kubenswrapper[4808]: E0217 15:55:21.145861 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225782 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225833 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225848 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225950 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.225973 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330027 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330097 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330116 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330143 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.330162 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433466 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433531 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433547 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433597 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.433617 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536860 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536938 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536955 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.536988 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.537008 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640637 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640734 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640761 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.640780 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743469 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743539 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743557 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743634 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.743666 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846617 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846697 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846716 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846747 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.846767 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951278 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951338 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951352 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951376 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:21 crc kubenswrapper[4808]: I0217 15:55:21.951394 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:21Z","lastTransitionTime":"2026-02-17T15:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055148 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055227 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055247 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055280 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.055304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.146432 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:41:32.460563848 +0000 UTC Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158393 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158450 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158468 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158492 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.158514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261341 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261381 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261399 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261433 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.261469 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364727 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364785 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364809 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364837 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.364860 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468059 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468140 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468158 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468180 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.468198 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571490 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571542 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571561 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571608 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.571627 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675169 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675243 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675265 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675297 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.675322 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778748 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778810 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778829 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778856 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.778876 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882129 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882207 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882220 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882242 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.882262 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986455 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986560 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986622 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986714 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:22 crc kubenswrapper[4808]: I0217 15:55:22.986735 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:22Z","lastTransitionTime":"2026-02-17T15:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090438 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090540 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090568 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090680 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.090726 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145421 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145483 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145521 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.145650 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.145727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.145856 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.145979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:23 crc kubenswrapper[4808]: E0217 15:55:23.146254 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.147385 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:41:42.693545656 +0000 UTC Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193603 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193649 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193659 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193681 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.193699 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297365 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297412 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297423 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297442 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.297456 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.400987 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401037 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401085 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.401104 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504410 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504456 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504473 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504495 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.504514 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608320 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608377 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608388 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608406 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.608419 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711758 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711824 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711841 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711870 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.711889 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815911 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815961 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815973 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.815990 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.816004 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.919896 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.919954 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.919965 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.920006 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:23 crc kubenswrapper[4808]: I0217 15:55:23.920019 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:23Z","lastTransitionTime":"2026-02-17T15:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023558 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023658 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023668 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023708 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.023723 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127218 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127284 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127302 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127329 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.127347 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.147899 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:42:57.678122673 +0000 UTC Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238489 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238673 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238706 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238746 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.238775 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.341975 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342056 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342078 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342108 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.342134 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445636 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445720 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445740 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445767 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.445786 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526102 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526188 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526246 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526282 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.526304 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564164 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564195 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564203 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564219 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.564230 4808 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-17T15:55:24Z","lastTransitionTime":"2026-02-17T15:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.618464 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76"] Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.619226 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.622023 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.622360 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.622448 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.623734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.678973 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2737fdbb-be6e-4b06-bdf6-43aeb1186369-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679066 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679160 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2737fdbb-be6e-4b06-bdf6-43aeb1186369-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.679314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2737fdbb-be6e-4b06-bdf6-43aeb1186369-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.692040 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-msgfd" podStartSLOduration=67.692024378 podStartE2EDuration="1m7.692024378s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.653840147 +0000 UTC m=+88.170199230" watchObservedRunningTime="2026-02-17 15:55:24.692024378 +0000 UTC m=+88.208383461" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.709498 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.70947155 podStartE2EDuration="35.70947155s" 
podCreationTimestamp="2026-02-17 15:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.708906636 +0000 UTC m=+88.225265749" watchObservedRunningTime="2026-02-17 15:55:24.70947155 +0000 UTC m=+88.225830633" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.725836 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.725811393 podStartE2EDuration="6.725811393s" podCreationTimestamp="2026-02-17 15:55:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.725159986 +0000 UTC m=+88.241519089" watchObservedRunningTime="2026-02-17 15:55:24.725811393 +0000 UTC m=+88.242170506" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780707 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2737fdbb-be6e-4b06-bdf6-43aeb1186369-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2737fdbb-be6e-4b06-bdf6-43aeb1186369-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780941 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2737fdbb-be6e-4b06-bdf6-43aeb1186369-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.780993 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.781101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.781175 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2737fdbb-be6e-4b06-bdf6-43aeb1186369-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 
15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.783484 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2737fdbb-be6e-4b06-bdf6-43aeb1186369-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.791902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2737fdbb-be6e-4b06-bdf6-43aeb1186369-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.806625 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2737fdbb-be6e-4b06-bdf6-43aeb1186369-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rpl76\" (UID: \"2737fdbb-be6e-4b06-bdf6-43aeb1186369\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.839412 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-f8pfh" podStartSLOduration=68.839374181 podStartE2EDuration="1m8.839374181s" podCreationTimestamp="2026-02-17 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.81933565 +0000 UTC m=+88.335694783" watchObservedRunningTime="2026-02-17 15:55:24.839374181 +0000 UTC m=+88.355733254" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.839614 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-86pl6" podStartSLOduration=66.839609837 podStartE2EDuration="1m6.839609837s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.838822446 +0000 UTC m=+88.355181559" watchObservedRunningTime="2026-02-17 15:55:24.839609837 +0000 UTC m=+88.355968910" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.882413 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.8823759 podStartE2EDuration="1m8.8823759s" podCreationTimestamp="2026-02-17 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.862793811 +0000 UTC m=+88.379152964" watchObservedRunningTime="2026-02-17 15:55:24.8823759 +0000 UTC m=+88.398735003" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.895069 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pr5s4" podStartSLOduration=67.895045375 podStartE2EDuration="1m7.895045375s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.894668415 +0000 UTC m=+88.411027588" watchObservedRunningTime="2026-02-17 15:55:24.895045375 +0000 UTC m=+88.411404448" Feb 17 15:55:24 crc kubenswrapper[4808]: I0217 15:55:24.939785 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" Feb 17 15:55:24 crc kubenswrapper[4808]: W0217 15:55:24.989881 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2737fdbb_be6e_4b06_bdf6_43aeb1186369.slice/crio-37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481 WatchSource:0}: Error finding container 37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481: Status 404 returned error can't find the container with id 37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481 Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.001196 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=65.001171226 podStartE2EDuration="1m5.001171226s" podCreationTimestamp="2026-02-17 15:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:24.977969241 +0000 UTC m=+88.494328334" watchObservedRunningTime="2026-02-17 15:55:25.001171226 +0000 UTC m=+88.517530309" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.040018 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kx4nl" podStartSLOduration=68.039994734 podStartE2EDuration="1m8.039994734s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:25.039943092 +0000 UTC m=+88.556302235" watchObservedRunningTime="2026-02-17 15:55:25.039994734 +0000 UTC m=+88.556353817" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.145282 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.145423 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.145521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.145289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.145762 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.146204 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.146482 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:25 crc kubenswrapper[4808]: E0217 15:55:25.146822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.148004 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:20:22.349480065 +0000 UTC Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.148060 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.155536 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.754207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" event={"ID":"2737fdbb-be6e-4b06-bdf6-43aeb1186369","Type":"ContainerStarted","Data":"42737538f82a4ba95a740ff938504a0e1c236bf7b0e67b94a50d9b0fab529bab"} Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.754306 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" 
event={"ID":"2737fdbb-be6e-4b06-bdf6-43aeb1186369","Type":"ContainerStarted","Data":"37bcfc963470957993f2590642cd56327641dde3bb2684fd123bbe6036cc7481"} Feb 17 15:55:25 crc kubenswrapper[4808]: I0217 15:55:25.775920 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podStartSLOduration=68.775887614 podStartE2EDuration="1m8.775887614s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:25.055790463 +0000 UTC m=+88.572149606" watchObservedRunningTime="2026-02-17 15:55:25.775887614 +0000 UTC m=+89.292246697" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.145557 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.145663 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.145795 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.145881 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.156354 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:27 crc kubenswrapper[4808]: I0217 15:55:27.156699 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.157330 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:27 crc kubenswrapper[4808]: E0217 15:55:27.156812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.144972 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.145019 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.145033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.145117 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.145301 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.145660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:29 crc kubenswrapper[4808]: I0217 15:55:29.146794 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:29 crc kubenswrapper[4808]: E0217 15:55:29.147056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:30 crc kubenswrapper[4808]: I0217 15:55:30.146434 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:30 crc kubenswrapper[4808]: E0217 15:55:30.146778 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.145668 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.145806 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.145878 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.146053 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.145806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.146286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:31 crc kubenswrapper[4808]: I0217 15:55:31.146379 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:31 crc kubenswrapper[4808]: E0217 15:55:31.146623 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144834 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144882 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144866 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:33 crc kubenswrapper[4808]: I0217 15:55:33.144819 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145019 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145172 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145303 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:33 crc kubenswrapper[4808]: E0217 15:55:33.145418 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.145816 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.145942 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.146034 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146032 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146195 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.146268 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146357 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.146435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:35 crc kubenswrapper[4808]: I0217 15:55:35.729283 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.729671 4808 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:35 crc kubenswrapper[4808]: E0217 15:55:35.730142 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs podName:b88c3e5f-7390-477c-ae74-aced26a8ddf9 nodeName:}" failed. No retries permitted until 2026-02-17 15:56:39.730108328 +0000 UTC m=+163.246467431 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs") pod "network-metrics-daemon-z8tn8" (UID: "b88c3e5f-7390-477c-ae74-aced26a8ddf9") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.145466 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.145522 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.146958 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:37 crc kubenswrapper[4808]: I0217 15:55:37.147024 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147227 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147378 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147446 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:37 crc kubenswrapper[4808]: E0217 15:55:37.147730 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.144967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.145063 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.144967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.145175 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.145724 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.145878 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.145997 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:39 crc kubenswrapper[4808]: E0217 15:55:39.146084 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.163221 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rpl76" podStartSLOduration=82.16319733 podStartE2EDuration="1m22.16319733s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:25.777770523 +0000 UTC m=+89.294129606" watchObservedRunningTime="2026-02-17 15:55:39.16319733 +0000 UTC m=+102.679556413" Feb 17 15:55:39 crc kubenswrapper[4808]: I0217 15:55:39.164298 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145181 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145276 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145372 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146619 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:41 crc kubenswrapper[4808]: I0217 15:55:41.145405 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146791 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146786 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:41 crc kubenswrapper[4808]: E0217 15:55:41.146604 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.145341 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.145532 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.145367 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.145553 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.145879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.146007 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.146790 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.146983 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" Feb 17 15:55:43 crc kubenswrapper[4808]: I0217 15:55:43.147023 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:43 crc kubenswrapper[4808]: E0217 15:55:43.147092 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.145512 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.145694 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.145687 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.145979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:45 crc kubenswrapper[4808]: I0217 15:55:45.146070 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.146522 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.146695 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:45 crc kubenswrapper[4808]: E0217 15:55:45.146862 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.145225 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.145470 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.147412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.147509 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.147648 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.147871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.148018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:47 crc kubenswrapper[4808]: E0217 15:55:47.148197 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:47 crc kubenswrapper[4808]: I0217 15:55:47.193552 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.193519713 podStartE2EDuration="8.193519713s" podCreationTimestamp="2026-02-17 15:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:55:47.189495967 +0000 UTC m=+110.705855100" watchObservedRunningTime="2026-02-17 15:55:47.193519713 +0000 UTC m=+110.709878816" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.144856 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.145018 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.145433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.145546 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.145671 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.145871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:49 crc kubenswrapper[4808]: I0217 15:55:49.145218 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:49 crc kubenswrapper[4808]: E0217 15:55:49.146137 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.144981 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.145186 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.145492 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.145630 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.145866 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:51 crc kubenswrapper[4808]: I0217 15:55:51.145870 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.145964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:51 crc kubenswrapper[4808]: E0217 15:55:51.146133 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.867553 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868383 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/0.log" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868447 4808 generic.go:334] "Generic (PLEG): container finished" podID="18916d6d-e063-40a0-816f-554f95cd2956" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e" exitCode=1 Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868495 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerDied","Data":"7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"} Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.868555 4808 scope.go:117] "RemoveContainer" containerID="d94a7bfe9ebc3fcec167acc2f840374566394d9425801a71bd3626ce196ee3a1" Feb 17 15:55:52 crc kubenswrapper[4808]: I0217 15:55:52.869350 4808 scope.go:117] "RemoveContainer" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e" Feb 17 15:55:52 crc kubenswrapper[4808]: E0217 
15:55:52.869645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-msgfd_openshift-multus(18916d6d-e063-40a0-816f-554f95cd2956)\"" pod="openshift-multus/multus-msgfd" podUID="18916d6d-e063-40a0-816f-554f95cd2956" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145263 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145380 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.145492 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.145702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145298 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.145894 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.145272 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:53 crc kubenswrapper[4808]: E0217 15:55:53.146049 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:53 crc kubenswrapper[4808]: I0217 15:55:53.876486 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145422 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145475 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.145796 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145536 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:55 crc kubenswrapper[4808]: I0217 15:55:55.145542 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.146057 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.146185 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:55 crc kubenswrapper[4808]: E0217 15:55:55.146329 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.129754 4808 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.144892 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.146770 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.146829 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.146839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:55:57 crc kubenswrapper[4808]: I0217 15:55:57.146921 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.146955 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.147152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.147412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 17 15:55:57 crc kubenswrapper[4808]: E0217 15:55:57.254931 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 17 15:55:58 crc kubenswrapper[4808]: I0217 15:55:58.147069 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"
Feb 17 15:55:58 crc kubenswrapper[4808]: E0217 15:55:58.147509 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tgvlh_openshift-ovn-kubernetes(5748f02a-e3dd-47c7-b89d-b472c718e593)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593"
Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145349 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145434 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145365 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.145638 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.145828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.145961 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:55:59 crc kubenswrapper[4808]: I0217 15:55:59.145971 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:55:59 crc kubenswrapper[4808]: E0217 15:55:59.146292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145712 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145561 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.145828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:01 crc kubenswrapper[4808]: I0217 15:56:01.145984 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.145964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.146099 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:01 crc kubenswrapper[4808]: E0217 15:56:01.146275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:02 crc kubenswrapper[4808]: E0217 15:56:02.256353 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.145813 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.146055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.146849 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.146911 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.147034 4808 scope.go:117] "RemoveContainer" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"
Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.147036 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.147397 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.147562 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:03 crc kubenswrapper[4808]: E0217 15:56:03.148848 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.924309 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log"
Feb 17 15:56:03 crc kubenswrapper[4808]: I0217 15:56:03.924964 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c"}
Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145653 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.145848 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145551 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:05 crc kubenswrapper[4808]: I0217 15:56:05.145996 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.146165 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.146232 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:05 crc kubenswrapper[4808]: E0217 15:56:05.146363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.145252 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.145417 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147324 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.147386 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:07 crc kubenswrapper[4808]: I0217 15:56:07.147352 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147614 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.147707 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:07 crc kubenswrapper[4808]: E0217 15:56:07.257210 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.145797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.145902 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146012 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.146041 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:09 crc kubenswrapper[4808]: I0217 15:56:09.146159 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146371 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:09 crc kubenswrapper[4808]: E0217 15:56:09.146429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.145630 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.145745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.145891 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.145906 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.146002 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.146191 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.146320 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:11 crc kubenswrapper[4808]: E0217 15:56:11.146500 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.147786 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.969261 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log"
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.973737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerStarted","Data":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 15:56:11 crc kubenswrapper[4808]: I0217 15:56:11.974774 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh"
Feb 17 15:56:12 crc kubenswrapper[4808]: I0217 15:56:12.178695 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podStartSLOduration=115.178660586 podStartE2EDuration="1m55.178660586s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:12.026785759 +0000 UTC m=+135.543144882" watchObservedRunningTime="2026-02-17 15:56:12.178660586 +0000 UTC m=+135.695019699"
Feb 17 15:56:12 crc kubenswrapper[4808]: I0217 15:56:12.180394 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z8tn8"]
Feb 17 15:56:12 crc kubenswrapper[4808]: I0217 15:56:12.180560 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:12 crc kubenswrapper[4808]: E0217 15:56:12.180815 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:12 crc kubenswrapper[4808]: E0217 15:56:12.259372 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 15:56:13 crc kubenswrapper[4808]: I0217 15:56:13.145403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:13 crc kubenswrapper[4808]: I0217 15:56:13.145481 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:13 crc kubenswrapper[4808]: I0217 15:56:13.145403 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:13 crc kubenswrapper[4808]: E0217 15:56:13.145724 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:13 crc kubenswrapper[4808]: E0217 15:56:13.146101 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:13 crc kubenswrapper[4808]: E0217 15:56:13.146338 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:14 crc kubenswrapper[4808]: I0217 15:56:14.145373 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:14 crc kubenswrapper[4808]: E0217 15:56:14.145674 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:15 crc kubenswrapper[4808]: I0217 15:56:15.145763 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:15 crc kubenswrapper[4808]: I0217 15:56:15.145886 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:15 crc kubenswrapper[4808]: I0217 15:56:15.145890 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:15 crc kubenswrapper[4808]: E0217 15:56:15.146025 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:15 crc kubenswrapper[4808]: E0217 15:56:15.146281 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:15 crc kubenswrapper[4808]: E0217 15:56:15.146735 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:16 crc kubenswrapper[4808]: I0217 15:56:16.145033 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:16 crc kubenswrapper[4808]: E0217 15:56:16.145279 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z8tn8" podUID="b88c3e5f-7390-477c-ae74-aced26a8ddf9"
Feb 17 15:56:17 crc kubenswrapper[4808]: I0217 15:56:17.145868 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:17 crc kubenswrapper[4808]: I0217 15:56:17.146022 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:17 crc kubenswrapper[4808]: E0217 15:56:17.147829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 17 15:56:17 crc kubenswrapper[4808]: E0217 15:56:17.148015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 17 15:56:17 crc kubenswrapper[4808]: I0217 15:56:17.148119 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:17 crc kubenswrapper[4808]: E0217 15:56:17.148295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 17 15:56:18 crc kubenswrapper[4808]: I0217 15:56:18.144957 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:18 crc kubenswrapper[4808]: I0217 15:56:18.148550 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 17 15:56:18 crc kubenswrapper[4808]: I0217 15:56:18.150378 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.145631 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.145946 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.145960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.148643 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.149137 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.149222 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 17 15:56:19 crc kubenswrapper[4808]: I0217 15:56:19.151110 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 17 15:56:21 crc kubenswrapper[4808]: I0217 15:56:21.593104 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 15:56:21 crc kubenswrapper[4808]: I0217 15:56:21.593210 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 15:56:24 crc kubenswrapper[4808]: I0217 15:56:24.362767 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.110617 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:25 crc kubenswrapper[4808]: E0217 15:56:25.110920 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:58:27.110868111 +0000 UTC m=+270.627227214 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.212812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.212933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.212998 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.213053 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.214816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.224373 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.224938 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.228453 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.463636 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.468990 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.476528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.695130 4808 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.733348 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jp8q"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.733934 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-srhjb"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.734250 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.734656 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.737108 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.737538 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.737831 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.738103 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.739290 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.739556 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746457 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746629 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746833 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.746937 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.747087 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.747198 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.747335 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748034 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748195 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: 
I0217 15:56:25.748224 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748390 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748451 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748592 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748766 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748895 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.748954 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.749091 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.749128 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.774705 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.774532 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 
15:56:25.775051 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.780539 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.781299 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.784969 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785128 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785260 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785403 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.785565 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.787036 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.789474 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.789792 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 
15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.789985 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.790125 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.790333 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.790912 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.791125 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.791248 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.793737 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.794521 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.797012 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.796600 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.797715 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.811621 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.811850 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.811888 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812105 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812136 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812200 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.812487 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.814925 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mxgf8"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.815315 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.815517 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.817934 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.818773 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.818992 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819168 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819405 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819678 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.819681 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.820953 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.820988 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10596b8a-e57a-498e-a7e8-e017fde34d54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821007 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821030 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" 
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-config\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xcvb\" (UniqueName: \"kubernetes.io/projected/b9a99858-5ada-47b7-855c-8d3b43ab9fee-kube-api-access-7xcvb\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821085 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-node-pullsecrets\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9a99858-5ada-47b7-855c-8d3b43ab9fee-machine-approver-tls\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821117 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-config\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldfqj\" (UniqueName: \"kubernetes.io/projected/10596b8a-e57a-498e-a7e8-e017fde34d54-kube-api-access-ldfqj\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821169 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/681a57d4-bd74-4910-a3f3-517b96a15123-kube-api-access-9b5xt\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821187 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-encryption-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-audit-policies\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821223 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-auth-proxy-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821239 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-image-import-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821254 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-serving-cert\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10596b8a-e57a-498e-a7e8-e017fde34d54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-encryption-config\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmd2\" (UniqueName: \"kubernetes.io/projected/656b06bf-9660-4c18-941b-5e5589f0301a-kube-api-access-vbmd2\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821320 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821343 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/656b06bf-9660-4c18-941b-5e5589f0301a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821376 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-client\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821393 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-serving-cert\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821413 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fth\" (UniqueName: \"kubernetes.io/projected/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-kube-api-access-s8fth\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821434 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821589 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-client\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821616 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-images\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821645 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-serving-cert\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821697 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxs6p\" (UniqueName: \"kubernetes.io/projected/d0ee93f1-93ac-4db2-b35e-5be5bded6541-kube-api-access-wxs6p\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit-dir\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.821753 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/681a57d4-bd74-4910-a3f3-517b96a15123-audit-dir\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.823182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.825960 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.832244 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.832479 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.832623 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.835930 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.835992 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836197 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836282 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836382 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836422 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836504 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836546 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836658 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836785 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836888 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.836944 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837029 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837062 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837151 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837180 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837269 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837480 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.837604 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.838802 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.842851 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.843690 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.843906 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844022 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844216 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844434 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.844507 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wlj8d"]
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.845052 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.845162 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.845883 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.846153 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.846249 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.847176 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.847846 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.849217 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.850272 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.858642 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.861561 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.861707 4808 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.862747 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.863472 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.863655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.863656 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.878788 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.881504 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.882984 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.884363 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.882987 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.887224 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.888648 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.890159 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.890780 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.891287 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.891561 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.891917 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892305 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892510 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.892867 4808 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.895364 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.899360 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8js4"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.913359 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.913961 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.915128 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.915141 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.915941 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.916343 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.917008 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2lsb7"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.917974 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.918246 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.918335 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.921000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.921520 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.922302 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jp8q"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.923424 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.924143 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.924783 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.925537 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-config\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.925607 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926673 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldfqj\" (UniqueName: \"kubernetes.io/projected/10596b8a-e57a-498e-a7e8-e017fde34d54-kube-api-access-ldfqj\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926695 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926716 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926742 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-serving-cert\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926764 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926788 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/681a57d4-bd74-4910-a3f3-517b96a15123-kube-api-access-9b5xt\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c0b903-63ed-4811-a991-9a5751a4c640-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-encryption-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc 
kubenswrapper[4808]: I0217 15:56:25.926891 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-trusted-ca\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-audit-policies\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-auth-proxy-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926976 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " 
pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926995 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-image-import-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927052 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-serving-cert\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927070 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10596b8a-e57a-498e-a7e8-e017fde34d54-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0131c573-bf76-49f4-9581-dd39ef60b27f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927106 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-config\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.927474 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.926559 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-config\") pod 
\"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.925692 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.928909 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-encryption-config\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.928942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbmd2\" (UniqueName: \"kubernetes.io/projected/656b06bf-9660-4c18-941b-5e5589f0301a-kube-api-access-vbmd2\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929074 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c0b903-63ed-4811-a991-9a5751a4c640-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: 
\"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929218 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929542 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929581 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-client\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929601 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-serving-cert\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929620 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/656b06bf-9660-4c18-941b-5e5589f0301a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929639 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc 
kubenswrapper[4808]: I0217 15:56:25.929655 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929681 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-kube-api-access-jwn6m\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8fth\" (UniqueName: \"kubernetes.io/projected/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-kube-api-access-s8fth\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929728 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwlfb\" (UniqueName: \"kubernetes.io/projected/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-kube-api-access-pwlfb\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 
17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-client\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.929992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-images\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930014 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-serving-cert\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930072 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7096e1-8ca1-483d-8e12-1cc79d28182a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930100 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxs6p\" (UniqueName: \"kubernetes.io/projected/d0ee93f1-93ac-4db2-b35e-5be5bded6541-kube-api-access-wxs6p\") pod \"apiserver-76f77b778f-7jp8q\" (UID: 
\"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930166 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tzz\" (UniqueName: \"kubernetes.io/projected/c8c0b903-63ed-4811-a991-9a5751a4c640-kube-api-access-k5tzz\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930191 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit-dir\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930657 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z82w8"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931219 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931291 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.930189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit-dir\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931371 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931409 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/681a57d4-bd74-4910-a3f3-517b96a15123-audit-dir\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnfd\" (UniqueName: \"kubernetes.io/projected/0131c573-bf76-49f4-9581-dd39ef60b27f-kube-api-access-pnnfd\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931449 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c7096e1-8ca1-483d-8e12-1cc79d28182a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931517 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931557 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931613 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10596b8a-e57a-498e-a7e8-e017fde34d54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931694 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931713 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931741 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931767 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fttb4\" (UniqueName: \"kubernetes.io/projected/116ae5bc-cf7e-45ad-9800-501bcfc04ff7-kube-api-access-fttb4\") pod \"downloads-7954f5f757-wlj8d\" (UID: \"116ae5bc-cf7e-45ad-9800-501bcfc04ff7\") " pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931785 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-config\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931805 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xcvb\" (UniqueName: \"kubernetes.io/projected/b9a99858-5ada-47b7-855c-8d3b43ab9fee-kube-api-access-7xcvb\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-node-pullsecrets\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9a99858-5ada-47b7-855c-8d3b43ab9fee-machine-approver-tls\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: 
I0217 15:56:25.931866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931884 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.932010 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.932077 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/681a57d4-bd74-4910-a3f3-517b96a15123-audit-dir\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.931491 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.932786 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.933544 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.933951 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-audit-policies\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.934398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-encryption-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.934467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-config\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.935426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-image-import-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.936264 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/656b06bf-9660-4c18-941b-5e5589f0301a-images\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.937009 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.937236 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10596b8a-e57a-498e-a7e8-e017fde34d54-config\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.937737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-audit\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.938787 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.939568 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.940697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-client\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.941223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.943656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-serving-cert\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/656b06bf-9660-4c18-941b-5e5589f0301a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945652 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945706 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.945722 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"] 
Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.946561 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.946912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9a99858-5ada-47b7-855c-8d3b43ab9fee-auth-proxy-config\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.947928 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0ee93f1-93ac-4db2-b35e-5be5bded6541-serving-cert\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.948173 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d0ee93f1-93ac-4db2-b35e-5be5bded6541-node-pullsecrets\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.948433 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.950457 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.951415 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-jwcd2"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.951620 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d0ee93f1-93ac-4db2-b35e-5be5bded6541-etcd-serving-ca\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.951998 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.952070 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.952970 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.953561 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.955772 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-encryption-config\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.956635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.959647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.960272 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.961991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b9a99858-5ada-47b7-855c-8d3b43ab9fee-machine-approver-tls\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.965714 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bqslk"] Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.989288 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.990333 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10596b8a-e57a-498e-a7e8-e017fde34d54-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.990821 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/681a57d4-bd74-4910-a3f3-517b96a15123-etcd-client\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.991616 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-service-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.992799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.997049 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-config\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:25 crc kubenswrapper[4808]: I0217 15:56:25.997377 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-serving-cert\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009427 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009522 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009542 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mxgf8"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009627 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.009644 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.011237 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.011506 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.012751 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.023619 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.023689 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.023807 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.025865 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-dgt46"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.026072 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.028227 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.028470 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.034964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.035034 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2lsb7"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.035071 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-srhjb"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.035253 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036439 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8js4"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036541 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036598 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036630 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnfd\" (UniqueName: \"kubernetes.io/projected/0131c573-bf76-49f4-9581-dd39ef60b27f-kube-api-access-pnnfd\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036648 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036695 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c7096e1-8ca1-483d-8e12-1cc79d28182a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036716 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod 
\"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036735 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttb4\" (UniqueName: \"kubernetes.io/projected/116ae5bc-cf7e-45ad-9800-501bcfc04ff7-kube-api-access-fttb4\") pod \"downloads-7954f5f757-wlj8d\" (UID: \"116ae5bc-cf7e-45ad-9800-501bcfc04ff7\") " pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036828 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036846 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036882 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036905 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 
17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.036984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-serving-cert\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c0b903-63ed-4811-a991-9a5751a4c640-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037054 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-trusted-ca\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037127 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nx4t\" 
(UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0131c573-bf76-49f4-9581-dd39ef60b27f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037214 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-config\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037263 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 
15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c0b903-63ed-4811-a991-9a5751a4c640-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037297 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037346 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037499 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-kube-api-access-jwn6m\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwlfb\" (UniqueName: \"kubernetes.io/projected/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-kube-api-access-pwlfb\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: 
I0217 15:56:26.037567 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037626 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7096e1-8ca1-483d-8e12-1cc79d28182a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037669 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5tzz\" (UniqueName: \"kubernetes.io/projected/c8c0b903-63ed-4811-a991-9a5751a4c640-kube-api-access-k5tzz\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.038919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.038934 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.041231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.041781 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042506 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042694 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.042777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.043385 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.044195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-trusted-ca\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " 
pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.044414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.044653 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.045133 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.045241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.045949 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047358 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.037433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"598dc183dd2b9e8a46b146f48602e9a7534af890e299ed52ca5218c75e2d22bb"} Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048521 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048540 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048550 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048561 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048584 4808 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048316 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047476 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c7096e1-8ca1-483d-8e12-1cc79d28182a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047610 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.047948 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.048821 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c0b903-63ed-4811-a991-9a5751a4c640-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049196 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc 
kubenswrapper[4808]: I0217 15:56:26.049457 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049732 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049765 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049868 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-serving-cert\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.049869 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c0b903-63ed-4811-a991-9a5751a4c640-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.050514 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.050997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-config\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.051500 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z82w8"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.052273 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0131c573-bf76-49f4-9581-dd39ef60b27f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.052465 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.052765 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 
15:56:26.053459 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.053974 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"82e541556e6ce0442b09137b0858a03054cd7e7a18942157809b43a8880c3d02"} Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.054417 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wlj8d"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.054726 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9c7096e1-8ca1-483d-8e12-1cc79d28182a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.056507 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.057339 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.057489 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"] Feb 17 15:56:26 crc kubenswrapper[4808]: 
I0217 15:56:26.057794 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.058625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.059405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6cdd5cb18e1bdebffd9820b4e73b86bc68c6546abca2d803fe6bf1f7fb6af638"} Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.059618 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.061283 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.061537 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.062610 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.064183 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.065066 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.066125 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.067632 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x2jlg"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.068698 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bqslk"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.068828 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.069891 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.072076 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.072106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.074371 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2jlg"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.077429 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.079470 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxj7b"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.080530 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-z4qfh"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.080791 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.080988 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.081620 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxj7b"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.082656 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4qfh"] Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.098354 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.117817 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.138667 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.169967 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.178510 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.197929 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.217498 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.244247 4808 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.262374 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.278130 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.298348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.318484 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.338035 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.359333 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.377683 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.398456 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.417511 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.437637 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.457436 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.478314 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.498866 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.519282 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.569752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldfqj\" (UniqueName: \"kubernetes.io/projected/10596b8a-e57a-498e-a7e8-e017fde34d54-kube-api-access-ldfqj\") pod \"openshift-apiserver-operator-796bbdcf4f-cg82l\" (UID: \"10596b8a-e57a-498e-a7e8-e017fde34d54\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.588407 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbmd2\" (UniqueName: \"kubernetes.io/projected/656b06bf-9660-4c18-941b-5e5589f0301a-kube-api-access-vbmd2\") pod \"machine-api-operator-5694c8668f-srhjb\" (UID: \"656b06bf-9660-4c18-941b-5e5589f0301a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.599543 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 
15:56:26.600830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5xt\" (UniqueName: \"kubernetes.io/projected/681a57d4-bd74-4910-a3f3-517b96a15123-kube-api-access-9b5xt\") pod \"apiserver-7bbb656c7d-k48nr\" (UID: \"681a57d4-bd74-4910-a3f3-517b96a15123\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.617517 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.658514 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxs6p\" (UniqueName: \"kubernetes.io/projected/d0ee93f1-93ac-4db2-b35e-5be5bded6541-kube-api-access-wxs6p\") pod \"apiserver-76f77b778f-7jp8q\" (UID: \"d0ee93f1-93ac-4db2-b35e-5be5bded6541\") " pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.679078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8fth\" (UniqueName: \"kubernetes.io/projected/5b5592d9-5fbf-49ac-bab6-bf0e11f43706-kube-api-access-s8fth\") pod \"authentication-operator-69f744f599-4x6s2\" (UID: \"5b5592d9-5fbf-49ac-bab6-bf0e11f43706\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.681398 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.695325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xcvb\" (UniqueName: \"kubernetes.io/projected/b9a99858-5ada-47b7-855c-8d3b43ab9fee-kube-api-access-7xcvb\") pod \"machine-approver-56656f9798-jlwrb\" (UID: \"b9a99858-5ada-47b7-855c-8d3b43ab9fee\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.698738 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.718250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.739905 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.743932 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.758919 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.780775 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.780861 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.828304 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.828552 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.831457 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.834380 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.837685 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.859521 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.879658 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.892554 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.898330 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.917682 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.938073 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.956056 4808 request.go:700] Waited for 1.004957451s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.957939 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.978498 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:56:26 crc kubenswrapper[4808]: I0217 15:56:26.990714 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-srhjb"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:26.998414 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:56:27 crc kubenswrapper[4808]: W0217 15:56:27.015313 4808 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod656b06bf_9660_4c18_941b_5e5589f0301a.slice/crio-5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251 WatchSource:0}: Error finding container 5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251: Status 404 returned error can't find the container with id 5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251 Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.020000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.024745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7jp8q"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.027257 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.039428 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: W0217 15:56:27.051862 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0ee93f1_93ac_4db2_b35e_5be5bded6541.slice/crio-c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959 WatchSource:0}: Error finding container c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959: Status 404 returned error can't find the container with id c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959 Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.057139 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:56:27 
crc kubenswrapper[4808]: I0217 15:56:27.085849 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.089453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" event={"ID":"656b06bf-9660-4c18-941b-5e5589f0301a","Type":"ContainerStarted","Data":"5e563afa4930fb66b948ef11f25d64ff546003f7fa1ce0c3b63acce7c9033251"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.092892 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9b53404cf9f369504e347bb0f59ad736ebc746180be4233f4ce52cde59acdbb6"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.093454 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.098807 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.101763 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.102093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f4548123c62df5178f29eacbe19cd33a5d6082a8ea61dd747d0fff4c6c2a9ee4"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.112042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" 
event={"ID":"b9a99858-5ada-47b7-855c-8d3b43ab9fee","Type":"ContainerStarted","Data":"9ec72c46f7cf7687f5d5ecfe6b876370e2c5440f0f9428a29b45160d0a3d1ed1"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.113938 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" event={"ID":"10596b8a-e57a-498e-a7e8-e017fde34d54","Type":"ContainerStarted","Data":"f7e0bc1dfc7dffda94fa4f82a03a79bbb9edf48aa7e048c81228c0ad50aed0e8"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.117058 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ba4f7b2e5f7e52f93605f2507c380d0b72e9d8edee07184f123f56d7662913f5"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.117946 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.120567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerStarted","Data":"c9bef38d109ca11009a6f0cc93174fd1f33bc4520f641fbed7f054d6037ee959"} Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.131270 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-4x6s2"] Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.141158 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.159543 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 
15:56:27.178127 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.199921 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.218619 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.247241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.257313 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.278413 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.297536 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.318014 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.343199 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.358132 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.377753 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.397631 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455298 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98bde021-9860-4b02-9223-512db6787eff-serving-cert\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98bde021-9860-4b02-9223-512db6787eff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455463 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455508 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2z2\" (UniqueName: \"kubernetes.io/projected/98bde021-9860-4b02-9223-512db6787eff-kube-api-access-ql2z2\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455563 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455653 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.455783 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.456537 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:27.956507984 +0000 UTC m=+151.472867237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.460918 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.481206 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.497790 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.518937 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.539038 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.556591 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.556799 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.056732474 +0000 UTC m=+151.573091587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.556978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2z2\" (UniqueName: \"kubernetes.io/projected/98bde021-9860-4b02-9223-512db6787eff-kube-api-access-ql2z2\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-plugins-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557122 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557175 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557263 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvt4\" (UniqueName: \"kubernetes.io/projected/e20a6284-be62-4671-b75f-38b32dc20813-kube-api-access-vfvt4\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557327 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3267bf97-7e39-410a-8502-3737bfb7f963-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557438 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557496 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-node-bootstrap-token\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557820 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbwnc\" (UniqueName: \"kubernetes.io/projected/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-kube-api-access-bbwnc\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557880 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrx9\" (UniqueName: \"kubernetes.io/projected/0b9e5453-e92d-46cd-b8fb-c989f00809ae-kube-api-access-rlrx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.557989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3267bf97-7e39-410a-8502-3737bfb7f963-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558039 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df94p\" (UniqueName: \"kubernetes.io/projected/3ba06ea2-9714-49b5-8477-8eb056bb45a4-kube-api-access-df94p\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wns2k\" (UniqueName: \"kubernetes.io/projected/b7697c8e-8996-44b9-8b66-965584ab26e2-kube-api-access-wns2k\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx9v6\" (UniqueName: \"kubernetes.io/projected/71acbaae-e241-4c8e-ac2b-6dd40b15b494-kube-api-access-lx9v6\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5pw\" (UniqueName: \"kubernetes.io/projected/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-kube-api-access-hx5pw\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558365 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558409 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-csi-data-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-default-certificate\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558616 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558762 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9e5453-e92d-46cd-b8fb-c989f00809ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558848 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-stats-auth\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jcp4\" (UniqueName: \"kubernetes.io/projected/14c6770e-9659-4e77-a7f1-f3ef06ec332d-kube-api-access-5jcp4\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.558923 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559226 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559261 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559285 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-srv-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559318 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98bde021-9860-4b02-9223-512db6787eff-serving-cert\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.559343 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-images\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"
Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.559372 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.059349424 +0000 UTC m=+151.575708677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560047 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b736927-813a-4b21-80d6-a0b4106e2c95-metrics-tls\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560097 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560277 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9e5453-e92d-46cd-b8fb-c989f00809ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560325 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-config\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560356 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3267bf97-7e39-410a-8502-3737bfb7f963-config\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560385 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4x8\" (UniqueName: \"kubernetes.io/projected/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-kube-api-access-ct4x8\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr4p7\" (UniqueName: \"kubernetes.io/projected/4b736927-813a-4b21-80d6-a0b4106e2c95-kube-api-access-fr4p7\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-serving-cert\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560531 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98bde021-9860-4b02-9223-512db6787eff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560622 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71acbaae-e241-4c8e-ac2b-6dd40b15b494-proxy-tls\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560696 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-serving-cert\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.560739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.561265 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/98bde021-9860-4b02-9223-512db6787eff-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562475 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-service-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562527 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/092b0577-f19f-413d-afc5-bdc3a40f7f75-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562623 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4sbh\" (UniqueName: \"kubernetes.io/projected/b26b861c-ec52-4685-846c-ea022517e9fb-kube-api-access-t4sbh\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562770 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssq98\" (UniqueName: \"kubernetes.io/projected/9bca2625-c55d-4a28-b37d-2ac43d181e26-kube-api-access-ssq98\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562823 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-config\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.562988 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683fb061-dc67-431d-8a8a-d5a383794fef-config-volume\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/092b0577-f19f-413d-afc5-bdc3a40f7f75-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563242 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbxgq\" (UniqueName: \"kubernetes.io/projected/26fa95d4-8240-472a-a86f-98acf35ade67-kube-api-access-mbxgq\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-etcd-client\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.563940 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564007 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26b861c-ec52-4685-846c-ea022517e9fb-service-ca-bundle\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564055 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-socket-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrwgm\" (UniqueName: \"kubernetes.io/projected/69e8c398-683b-47dc-a517-633d625cbd97-kube-api-access-zrwgm\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rljkk\" (UniqueName: \"kubernetes.io/projected/fddf9ec8-447f-487c-a863-73ec68b90737-kube-api-access-rljkk\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"
Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8cc\" (UniqueName: \"kubernetes.io/projected/728793ed-1e89-455c-8d45-92c4ab08c1f6-kube-api-access-hz8cc\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") "
pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564257 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564439 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-metrics-certs\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564484 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-certs\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564703 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26fa95d4-8240-472a-a86f-98acf35ade67-proxy-tls\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.564778 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-key\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566598 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6770e-9659-4e77-a7f1-f3ef06ec332d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683fb061-dc67-431d-8a8a-d5a383794fef-metrics-tls\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhkfd\" (UniqueName: \"kubernetes.io/projected/683fb061-dc67-431d-8a8a-d5a383794fef-kube-api-access-rhkfd\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.566947 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8thp\" (UniqueName: \"kubernetes.io/projected/4f9ab75e-8898-4a0c-8630-c657450b648e-kube-api-access-s8thp\") pod \"migrator-59844c95c7-n5p8z\" (UID: \"4f9ab75e-8898-4a0c-8630-c657450b648e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.567045 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/728793ed-1e89-455c-8d45-92c4ab08c1f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.567132 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98bde021-9860-4b02-9223-512db6787eff-serving-cert\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.567156 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.569742 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.569871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71acbaae-e241-4c8e-ac2b-6dd40b15b494-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.569908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9c2k\" (UniqueName: \"kubernetes.io/projected/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-kube-api-access-f9c2k\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570144 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-config\") pod 
\"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-mountpoint-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570284 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-cabundle\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bca2625-c55d-4a28-b37d-2ac43d181e26-cert\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570811 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rng2l\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-kube-api-access-rng2l\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.570894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7697c8e-8996-44b9-8b66-965584ab26e2-tmpfs\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.571364 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.571552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-registration-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.571734 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-srv-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 
15:56:27.581445 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.583221 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.599436 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.619866 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.639365 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.659001 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.674846 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683fb061-dc67-431d-8a8a-d5a383794fef-config-volume\") pod \"dns-default-x2jlg\" 
(UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675145 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.675245 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.175200049 +0000 UTC m=+151.691559152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/092b0577-f19f-413d-afc5-bdc3a40f7f75-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbxgq\" (UniqueName: 
\"kubernetes.io/projected/26fa95d4-8240-472a-a86f-98acf35ade67-kube-api-access-mbxgq\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675526 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-etcd-client\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675641 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26b861c-ec52-4685-846c-ea022517e9fb-service-ca-bundle\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrwgm\" (UniqueName: \"kubernetes.io/projected/69e8c398-683b-47dc-a517-633d625cbd97-kube-api-access-zrwgm\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675685 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rljkk\" (UniqueName: \"kubernetes.io/projected/fddf9ec8-447f-487c-a863-73ec68b90737-kube-api-access-rljkk\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675749 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-socket-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675766 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8cc\" (UniqueName: \"kubernetes.io/projected/728793ed-1e89-455c-8d45-92c4ab08c1f6-kube-api-access-hz8cc\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675805 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-certs\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-metrics-certs\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26fa95d4-8240-472a-a86f-98acf35ade67-proxy-tls\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-key\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.675980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6770e-9659-4e77-a7f1-f3ef06ec332d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/683fb061-dc67-431d-8a8a-d5a383794fef-metrics-tls\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676038 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8thp\" (UniqueName: \"kubernetes.io/projected/4f9ab75e-8898-4a0c-8630-c657450b648e-kube-api-access-s8thp\") pod \"migrator-59844c95c7-n5p8z\" (UID: \"4f9ab75e-8898-4a0c-8630-c657450b648e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676057 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/728793ed-1e89-455c-8d45-92c4ab08c1f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 
15:56:27.676076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhkfd\" (UniqueName: \"kubernetes.io/projected/683fb061-dc67-431d-8a8a-d5a383794fef-kube-api-access-rhkfd\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71acbaae-e241-4c8e-ac2b-6dd40b15b494-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676157 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9c2k\" (UniqueName: \"kubernetes.io/projected/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-kube-api-access-f9c2k\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676194 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-config\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: 
\"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676214 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-mountpoint-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bca2625-c55d-4a28-b37d-2ac43d181e26-cert\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-cabundle\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rng2l\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-kube-api-access-rng2l\") pod 
\"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676391 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7697c8e-8996-44b9-8b66-965584ab26e2-tmpfs\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-socket-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-srv-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-registration-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676794 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-plugins-dir\") pod 
\"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.676963 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvt4\" (UniqueName: \"kubernetes.io/projected/e20a6284-be62-4671-b75f-38b32dc20813-kube-api-access-vfvt4\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677050 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3267bf97-7e39-410a-8502-3737bfb7f963-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677164 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-node-bootstrap-token\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlrx9\" (UniqueName: \"kubernetes.io/projected/0b9e5453-e92d-46cd-b8fb-c989f00809ae-kube-api-access-rlrx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3267bf97-7e39-410a-8502-3737bfb7f963-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbwnc\" (UniqueName: \"kubernetes.io/projected/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-kube-api-access-bbwnc\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677495 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wns2k\" (UniqueName: \"kubernetes.io/projected/b7697c8e-8996-44b9-8b66-965584ab26e2-kube-api-access-wns2k\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677555 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df94p\" (UniqueName: \"kubernetes.io/projected/3ba06ea2-9714-49b5-8477-8eb056bb45a4-kube-api-access-df94p\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677649 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-registration-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677653 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx9v6\" (UniqueName: 
\"kubernetes.io/projected/71acbaae-e241-4c8e-ac2b-6dd40b15b494-kube-api-access-lx9v6\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx5pw\" (UniqueName: \"kubernetes.io/projected/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-kube-api-access-hx5pw\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677920 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-csi-data-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc 
kubenswrapper[4808]: I0217 15:56:27.677921 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b26b861c-ec52-4685-846c-ea022517e9fb-service-ca-bundle\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677945 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-plugins-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.677953 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-default-certificate\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678042 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-stats-auth\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " 
pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jcp4\" (UniqueName: \"kubernetes.io/projected/14c6770e-9659-4e77-a7f1-f3ef06ec332d-kube-api-access-5jcp4\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9e5453-e92d-46cd-b8fb-c989f00809ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678632 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678672 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-srv-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-images\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678742 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b736927-813a-4b21-80d6-a0b4106e2c95-metrics-tls\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9e5453-e92d-46cd-b8fb-c989f00809ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-config\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 
15:56:27.678856 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4x8\" (UniqueName: \"kubernetes.io/projected/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-kube-api-access-ct4x8\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678929 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3267bf97-7e39-410a-8502-3737bfb7f963-config\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678952 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr4p7\" (UniqueName: \"kubernetes.io/projected/4b736927-813a-4b21-80d6-a0b4106e2c95-kube-api-access-fr4p7\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.678996 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-serving-cert\") pod 
\"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679022 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679077 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-serving-cert\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679097 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71acbaae-e241-4c8e-ac2b-6dd40b15b494-proxy-tls\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/092b0577-f19f-413d-afc5-bdc3a40f7f75-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679178 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679217 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-service-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4sbh\" (UniqueName: \"kubernetes.io/projected/b26b861c-ec52-4685-846c-ea022517e9fb-kube-api-access-t4sbh\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679256 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssq98\" (UniqueName: \"kubernetes.io/projected/9bca2625-c55d-4a28-b37d-2ac43d181e26-kube-api-access-ssq98\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679272 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-config\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.679903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.680459 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.680668 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71acbaae-e241-4c8e-ac2b-6dd40b15b494-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.684102 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.684561 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-config\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.685179 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26fa95d4-8240-472a-a86f-98acf35ade67-images\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.685385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-etcd-client\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.685513 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/14c6770e-9659-4e77-a7f1-f3ef06ec332d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 
15:56:27.686125 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3267bf97-7e39-410a-8502-3737bfb7f963-config\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.687150 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-csi-data-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.687416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b736927-813a-4b21-80d6-a0b4106e2c95-metrics-tls\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.688412 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b9e5453-e92d-46cd-b8fb-c989f00809ae-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.689021 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.188997822 +0000 UTC m=+151.705356915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.690047 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/69e8c398-683b-47dc-a517-633d625cbd97-mountpoint-dir\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.691002 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-etcd-service-ca\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.691777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b9e5453-e92d-46cd-b8fb-c989f00809ae-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: 
\"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692565 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7697c8e-8996-44b9-8b66-965584ab26e2-tmpfs\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692766 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71acbaae-e241-4c8e-ac2b-6dd40b15b494-proxy-tls\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/092b0577-f19f-413d-afc5-bdc3a40f7f75-trusted-ca\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.692880 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e20a6284-be62-4671-b75f-38b32dc20813-serving-cert\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693258 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-serving-cert\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693261 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.693818 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e20a6284-be62-4671-b75f-38b32dc20813-config\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694011 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-key\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694212 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694640 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-profile-collector-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694270 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.694385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3ba06ea2-9714-49b5-8477-8eb056bb45a4-signing-cabundle\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.697170 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-srv-cert\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.697331 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/092b0577-f19f-413d-afc5-bdc3a40f7f75-metrics-tls\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.698608 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/728793ed-1e89-455c-8d45-92c4ab08c1f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/26fa95d4-8240-472a-a86f-98acf35ade67-proxy-tls\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-default-certificate\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc 
kubenswrapper[4808]: I0217 15:56:27.699618 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-metrics-certs\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699633 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b26b861c-ec52-4685-846c-ea022517e9fb-stats-auth\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.699828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-srv-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.700238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.700772 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.701450 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.702040 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3267bf97-7e39-410a-8502-3737bfb7f963-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.712454 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-config\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.720228 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.736382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-node-bootstrap-token\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.738559 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.758664 
4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.770471 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fddf9ec8-447f-487c-a863-73ec68b90737-certs\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.778636 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.780927 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.781758 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.28172792 +0000 UTC m=+151.798086993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.793058 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-apiservice-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.793229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7697c8e-8996-44b9-8b66-965584ab26e2-webhook-cert\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.825302 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5tzz\" (UniqueName: \"kubernetes.io/projected/c8c0b903-63ed-4811-a991-9a5751a4c640-kube-api-access-k5tzz\") pod \"openshift-controller-manager-operator-756b6f6bc6-cbwrs\" (UID: \"c8c0b903-63ed-4811-a991-9a5751a4c640\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.847073 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnfd\" (UniqueName: \"kubernetes.io/projected/0131c573-bf76-49f4-9581-dd39ef60b27f-kube-api-access-pnnfd\") 
pod \"cluster-samples-operator-665b6dd947-bz4bz\" (UID: \"0131c573-bf76-49f4-9581-dd39ef60b27f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.866759 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"oauth-openshift-558db77b4-j6dgq\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.878697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.882876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.883298 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.383278986 +0000 UTC m=+151.899638069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.902978 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.909515 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttb4\" (UniqueName: \"kubernetes.io/projected/116ae5bc-cf7e-45ad-9800-501bcfc04ff7-kube-api-access-fttb4\") pod \"downloads-7954f5f757-wlj8d\" (UID: \"116ae5bc-cf7e-45ad-9800-501bcfc04ff7\") " pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.911130 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.919866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwn6m\" (UniqueName: \"kubernetes.io/projected/9c7096e1-8ca1-483d-8e12-1cc79d28182a-kube-api-access-jwn6m\") pod \"cluster-image-registry-operator-dc59b4c8b-9l858\" (UID: \"9c7096e1-8ca1-483d-8e12-1cc79d28182a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.926064 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.934125 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.942676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwlfb\" (UniqueName: \"kubernetes.io/projected/25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa-kube-api-access-pwlfb\") pod \"console-operator-58897d9998-mxgf8\" (UID: \"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa\") " pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.960745 4808 request.go:700] Waited for 1.912350383s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.973935 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"console-f9d7485db-hdg74\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.985412 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:27 crc kubenswrapper[4808]: E0217 15:56:27.986545 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.486494868 +0000 UTC m=+152.002853971 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.986606 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"route-controller-manager-6576b87f9c-j6vm5\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:27 crc kubenswrapper[4808]: I0217 15:56:27.995653 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"controller-manager-879f6c89f-cvqck\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.000504 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.012560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/683fb061-dc67-431d-8a8a-d5a383794fef-metrics-tls\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.053838 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.053881 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.056708 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683fb061-dc67-431d-8a8a-d5a383794fef-config-volume\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.060231 4808 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.078545 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.090801 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.091279 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:28.591256161 +0000 UTC m=+152.107615244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.098333 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.111852 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.118208 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.125428 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.136169 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bca2625-c55d-4a28-b37d-2ac43d181e26-cert\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.137825 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.147143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerDied","Data":"c19decad51c1b69b1826c2c8e0925aa45a5bc773d28bc99648af07b790b65c35"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.151130 4808 generic.go:334] "Generic (PLEG): container finished" podID="d0ee93f1-93ac-4db2-b35e-5be5bded6541" containerID="c19decad51c1b69b1826c2c8e0925aa45a5bc773d28bc99648af07b790b65c35" exitCode=0 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.157397 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.167550 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.175711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" event={"ID":"656b06bf-9660-4c18-941b-5e5589f0301a","Type":"ContainerStarted","Data":"b1fb9b0bb3c50dd0d5e089cc840c6da5f34844e0c492b88ce6fec93b6bb3dd8b"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.175782 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" event={"ID":"656b06bf-9660-4c18-941b-5e5589f0301a","Type":"ContainerStarted","Data":"c84eddacbd701e2f4be21f89f0238d216b00bf47018ffe21f01b7c624a5bc7c9"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.177684 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.178844 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.184704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" event={"ID":"b9a99858-5ada-47b7-855c-8d3b43ab9fee","Type":"ContainerStarted","Data":"a8946f8ba57d15ff903547b5d3afb23f3b322a750291b72b3b9220f37b8f5053"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.184756 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" event={"ID":"b9a99858-5ada-47b7-855c-8d3b43ab9fee","Type":"ContainerStarted","Data":"cd04ae8543fbcb61e49789b7da0eacde06d915984a63355b376dce6b0abe2238"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.186772 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" event={"ID":"10596b8a-e57a-498e-a7e8-e017fde34d54","Type":"ContainerStarted","Data":"9b3cb0231c5f52b5ef2da876239e96adbe6e098823b4e3ca75f4c06c927f4847"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.191618 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.193384 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.693354663 +0000 UTC m=+152.209713736 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.193888 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" event={"ID":"5b5592d9-5fbf-49ac-bab6-bf0e11f43706","Type":"ContainerStarted","Data":"5ac52f8586bdc10b8663aa8a239c5aaec2728794ed514de1896d634d8f2ce1fc"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.193941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" event={"ID":"5b5592d9-5fbf-49ac-bab6-bf0e11f43706","Type":"ContainerStarted","Data":"900c59c2d581818a176801999b6fa9e6b878076d4f9af2ecbee4785471fad41f"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.194274 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.213788 4808 generic.go:334] "Generic (PLEG): container finished" podID="681a57d4-bd74-4910-a3f3-517b96a15123" containerID="642d65938791a8bb9629f1359ff2bf1885cdcece436e6ab4ec5878dfedf1c7f7" exitCode=0 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.215300 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" event={"ID":"681a57d4-bd74-4910-a3f3-517b96a15123","Type":"ContainerDied","Data":"642d65938791a8bb9629f1359ff2bf1885cdcece436e6ab4ec5878dfedf1c7f7"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.215790 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" event={"ID":"681a57d4-bd74-4910-a3f3-517b96a15123","Type":"ContainerStarted","Data":"3036eb853088e7295948e66ca9264222c463f1d60ce8d0011f48a145e6120ab6"} Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.218463 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.242782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2z2\" (UniqueName: \"kubernetes.io/projected/98bde021-9860-4b02-9223-512db6787eff-kube-api-access-ql2z2\") pod \"openshift-config-operator-7777fb866f-s2fz5\" (UID: \"98bde021-9860-4b02-9223-512db6787eff\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.262291 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.298495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.299339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.304244 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.804215861 +0000 UTC m=+152.320574934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.314999 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.328715 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbxgq\" (UniqueName: \"kubernetes.io/projected/26fa95d4-8240-472a-a86f-98acf35ade67-kube-api-access-mbxgq\") pod \"machine-config-operator-74547568cd-cw29n\" (UID: \"26fa95d4-8240-472a-a86f-98acf35ade67\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.334452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8b124f4-97ab-4512-a1a2-b93bc4e724e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-lzvjs\" (UID: \"b8b124f4-97ab-4512-a1a2-b93bc4e724e8\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.355109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rljkk\" (UniqueName: \"kubernetes.io/projected/fddf9ec8-447f-487c-a863-73ec68b90737-kube-api-access-rljkk\") pod \"machine-config-server-dgt46\" (UID: \"fddf9ec8-447f-487c-a863-73ec68b90737\") " pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.380809 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrwgm\" (UniqueName: \"kubernetes.io/projected/69e8c398-683b-47dc-a517-633d625cbd97-kube-api-access-zrwgm\") pod \"csi-hostpathplugin-dxj7b\" (UID: \"69e8c398-683b-47dc-a517-633d625cbd97\") " pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.386941 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.398930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8cc\" (UniqueName: \"kubernetes.io/projected/728793ed-1e89-455c-8d45-92c4ab08c1f6-kube-api-access-hz8cc\") pod \"multus-admission-controller-857f4d67dd-z82w8\" (UID: \"728793ed-1e89-455c-8d45-92c4ab08c1f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.400741 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.401254 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:28.901232195 +0000 UTC m=+152.417591258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.406042 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.415143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8thp\" (UniqueName: \"kubernetes.io/projected/4f9ab75e-8898-4a0c-8630-c657450b648e-kube-api-access-s8thp\") pod \"migrator-59844c95c7-n5p8z\" (UID: \"4f9ab75e-8898-4a0c-8630-c657450b648e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.441094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3267bf97-7e39-410a-8502-3737bfb7f963-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-54vjj\" (UID: \"3267bf97-7e39-410a-8502-3737bfb7f963\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.453052 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.459769 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wlj8d"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.467988 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9c2k\" (UniqueName: \"kubernetes.io/projected/445cb05c-ac1a-44a2-864f-a87e0e7b29a5-kube-api-access-f9c2k\") pod \"catalog-operator-68c6474976-8zrdj\" (UID: \"445cb05c-ac1a-44a2-864f-a87e0e7b29a5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.471735 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.471791 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.478260 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-dgt46" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.487056 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx9v6\" (UniqueName: \"kubernetes.io/projected/71acbaae-e241-4c8e-ac2b-6dd40b15b494-kube-api-access-lx9v6\") pod \"machine-config-controller-84d6567774-9bcck\" (UID: \"71acbaae-e241-4c8e-ac2b-6dd40b15b494\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.506320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.508333 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.008315491 +0000 UTC m=+152.524674564 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.515093 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhkfd\" (UniqueName: \"kubernetes.io/projected/683fb061-dc67-431d-8a8a-d5a383794fef-kube-api-access-rhkfd\") pod \"dns-default-x2jlg\" (UID: \"683fb061-dc67-431d-8a8a-d5a383794fef\") " pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.519473 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d6f6cc0-7fc0-411c-800f-f98dc61b5035-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mggmj\" (UID: \"2d6f6cc0-7fc0-411c-800f-f98dc61b5035\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.533024 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.539899 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvt4\" (UniqueName: \"kubernetes.io/projected/e20a6284-be62-4671-b75f-38b32dc20813-kube-api-access-vfvt4\") pod \"etcd-operator-b45778765-2lsb7\" (UID: \"e20a6284-be62-4671-b75f-38b32dc20813\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.565301 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5pw\" (UniqueName: \"kubernetes.io/projected/8ce31dac-90ec-4aa8-b765-1ee1add26c2d-kube-api-access-hx5pw\") pod \"olm-operator-6b444d44fb-pd6wv\" (UID: \"8ce31dac-90ec-4aa8-b765-1ee1add26c2d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.570269 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8c0b903_63ed_4811_a991_9a5751a4c640.slice/crio-6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9 WatchSource:0}: Error finding container 6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9: Status 404 returned error can't find the container with id 6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.572073 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.581452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wns2k\" (UniqueName: \"kubernetes.io/projected/b7697c8e-8996-44b9-8b66-965584ab26e2-kube-api-access-wns2k\") pod \"packageserver-d55dfcdfc-bmq9l\" (UID: \"b7697c8e-8996-44b9-8b66-965584ab26e2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.600018 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.600367 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.606518 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df94p\" (UniqueName: \"kubernetes.io/projected/3ba06ea2-9714-49b5-8477-8eb056bb45a4-kube-api-access-df94p\") pod \"service-ca-9c57cc56f-bqslk\" (UID: \"3ba06ea2-9714-49b5-8477-8eb056bb45a4\") " pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.608381 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.609262 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.609643 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.109621701 +0000 UTC m=+152.625980774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.612410 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.618976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"marketplace-operator-79b997595-sbr84\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.623146 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.657269 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4x8\" (UniqueName: \"kubernetes.io/projected/e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb-kube-api-access-ct4x8\") pod \"service-ca-operator-777779d784-jw4gs\" (UID: \"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.667806 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jcp4\" (UniqueName: \"kubernetes.io/projected/14c6770e-9659-4e77-a7f1-f3ef06ec332d-kube-api-access-5jcp4\") pod \"package-server-manager-789f6589d5-spzc7\" (UID: \"14c6770e-9659-4e77-a7f1-f3ef06ec332d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.674842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.675199 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.677149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4sbh\" (UniqueName: \"kubernetes.io/projected/b26b861c-ec52-4685-846c-ea022517e9fb-kube-api-access-t4sbh\") pod \"router-default-5444994796-jwcd2\" (UID: \"b26b861c-ec52-4685-846c-ea022517e9fb\") " pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.700375 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rng2l\" (UniqueName: \"kubernetes.io/projected/092b0577-f19f-413d-afc5-bdc3a40f7f75-kube-api-access-rng2l\") pod \"ingress-operator-5b745b69d9-8mjrc\" (UID: \"092b0577-f19f-413d-afc5-bdc3a40f7f75\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.711010 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.714107 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mxgf8"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.715392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssq98\" (UniqueName: \"kubernetes.io/projected/9bca2625-c55d-4a28-b37d-2ac43d181e26-kube-api-access-ssq98\") pod \"ingress-canary-z4qfh\" (UID: \"9bca2625-c55d-4a28-b37d-2ac43d181e26\") " pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.716534 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.717077 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.217059818 +0000 UTC m=+152.733418891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.719948 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.730762 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.731491 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.736671 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbwnc\" (UniqueName: \"kubernetes.io/projected/94f0bc0d-40c0-45b7-b6c4-7b285ba26c52-kube-api-access-bbwnc\") pod \"control-plane-machine-set-operator-78cbb6b69f-t8ws2\" (UID: \"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.739638 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.756888 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.757536 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.766359 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr4p7\" (UniqueName: \"kubernetes.io/projected/4b736927-813a-4b21-80d6-a0b4106e2c95-kube-api-access-fr4p7\") pod \"dns-operator-744455d44c-p8js4\" (UID: \"4b736927-813a-4b21-80d6-a0b4106e2c95\") " pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.766827 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.784604 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.789775 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.790084 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"collect-profiles-29522385-74pvr\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.805498 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode489a46b_9123_44c6_94e0_692621760dd6.slice/crio-0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2 WatchSource:0}: Error finding container 0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2: Status 404 returned error can't find the container with id 0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.807979 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.810218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlrx9\" (UniqueName: \"kubernetes.io/projected/0b9e5453-e92d-46cd-b8fb-c989f00809ae-kube-api-access-rlrx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-vsl5p\" (UID: \"0b9e5453-e92d-46cd-b8fb-c989f00809ae\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.827712 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.828121 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.32810067 +0000 UTC m=+152.844459743 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.829827 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25b3b271_e6e0_49c4_8fa2_17d8f8f2d5fa.slice/crio-ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784 WatchSource:0}: Error finding container ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784: Status 404 returned error can't find the container with id ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.830363 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.836331 4808 csr.go:261] certificate signing request csr-s7dzb is approved, waiting to be issued Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.837195 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-z4qfh" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.850050 4808 csr.go:257] certificate signing request csr-s7dzb is issued Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.851284 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.855872 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7649915_6408_4c30_8faa_0fb3ea55007a.slice/crio-82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7 WatchSource:0}: Error finding container 82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7: Status 404 returned error can't find the container with id 82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.862360 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.865559 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"] Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.871189 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z82w8"] Feb 17 15:56:28 crc kubenswrapper[4808]: W0217 15:56:28.877804 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33978535_84b2_4def_af5a_d2819171e202.slice/crio-844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41 WatchSource:0}: Error finding container 844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41: Status 404 returned error can't find the container with id 844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41 Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.879890 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.930093 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:28 crc kubenswrapper[4808]: E0217 15:56:28.930683 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.430661245 +0000 UTC m=+152.947020318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:28 crc kubenswrapper[4808]: I0217 15:56:28.994191 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.020833 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.031259 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.031492 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.531454401 +0000 UTC m=+153.047813474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.031741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.032100 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.532085508 +0000 UTC m=+153.048444581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.135982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.136267 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.636247825 +0000 UTC m=+153.152606898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.149312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.149800 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.649784611 +0000 UTC m=+153.166143684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.259739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerStarted","Data":"e6ae78a7a3d903296ea675e4bc85775c5deb4343fce73afb22a46d2dd260eb2b"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.269922 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerStarted","Data":"844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.269977 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerStarted","Data":"87a30c2a90c4016dabeb2fd3e6331db8b801e3a30d3bec36b1482acb813df460"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.269994 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" event={"ID":"681a57d4-bd74-4910-a3f3-517b96a15123","Type":"ContainerStarted","Data":"321947dd480cd7b15b3faa5a3e64c3d9f25bd01d43547606487454ebdfe13c32"} Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.260382 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.760365342 +0000 UTC m=+153.276724415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.260317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.270454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.271191 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.771175594 +0000 UTC m=+153.287534667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.283090 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" event={"ID":"c8c0b903-63ed-4811-a991-9a5751a4c640","Type":"ContainerStarted","Data":"0efbd4b20b52726670445669c69fa3d84a33cf7a9a1513f4adf2847935e90206"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.283151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" event={"ID":"c8c0b903-63ed-4811-a991-9a5751a4c640","Type":"ContainerStarted","Data":"6a92581d96f5ce106de955d0377d19380dc8e249c7afa67d973cce7eda45abe9"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.287890 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-dgt46" event={"ID":"fddf9ec8-447f-487c-a863-73ec68b90737","Type":"ContainerStarted","Data":"4fc09a408ae428519ff850f04e1ece64a9e06a09d945240a4178e82219634ddd"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.291467 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.295881 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" 
event={"ID":"728793ed-1e89-455c-8d45-92c4ab08c1f6","Type":"ContainerStarted","Data":"62720599c23d59a119c24066564cef1ed432a3f75bd093a41ebedb1728306105"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.297831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wlj8d" event={"ID":"116ae5bc-cf7e-45ad-9800-501bcfc04ff7","Type":"ContainerStarted","Data":"58c8b94806c545d56a550be6d5318f72da5d4f264e00031f9559fbabcc901c8a"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.297865 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wlj8d" event={"ID":"116ae5bc-cf7e-45ad-9800-501bcfc04ff7","Type":"ContainerStarted","Data":"e9fd786b7fdde5022035c172a3376a3a0c0e9583045af8d035ac7dc1cd54b6fb"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.299754 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wlj8d" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.303824 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.303928 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.315583 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2lsb7"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.316369 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.327563 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" event={"ID":"9c7096e1-8ca1-483d-8e12-1cc79d28182a","Type":"ContainerStarted","Data":"4540b69253f3420e20d6978d9585183bf8fdbe0b979b02a5c2377a9b2a29ace6"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.329871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerStarted","Data":"0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.335887 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerStarted","Data":"82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.345784 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" event={"ID":"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa","Type":"ContainerStarted","Data":"ed1f4c6d6c88c4b4542456888ff4d284d0a9aa668f50172407b3b791503bd784"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.353535 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" event={"ID":"0131c573-bf76-49f4-9581-dd39ef60b27f","Type":"ContainerStarted","Data":"767ad1226894880b9a5000e35b613fef9ade48f52d41faa5fb859779ef7a64fc"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.353652 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" 
event={"ID":"0131c573-bf76-49f4-9581-dd39ef60b27f","Type":"ContainerStarted","Data":"fbddaaafdcb10be11c9a676fc963e5e0d238265a4f79731f8f5f177d19ba9003"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.364554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" event={"ID":"26fa95d4-8240-472a-a86f-98acf35ade67","Type":"ContainerStarted","Data":"30ad84d9d762a2a57f1e25cb2a8142689ce9b165ac2b500002cff9aadc52f08a"} Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.376275 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.377891 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.877868081 +0000 UTC m=+153.394227154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.445741 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.453460 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-jlwrb" podStartSLOduration=133.453431024 podStartE2EDuration="2m13.453431024s" podCreationTimestamp="2026-02-17 15:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:29.453247068 +0000 UTC m=+152.969606141" watchObservedRunningTime="2026-02-17 15:56:29.453431024 +0000 UTC m=+152.969790107" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.471106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dxj7b"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.477881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.479473 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:29.979452628 +0000 UTC m=+153.495811701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.585428 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.586116 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.086098502 +0000 UTC m=+153.602457575 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: W0217 15:56:29.636383 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb26b861c_ec52_4685_846c_ea022517e9fb.slice/crio-3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676 WatchSource:0}: Error finding container 3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676: Status 404 returned error can't find the container with id 3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676 Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.688870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.689259 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.189245402 +0000 UTC m=+153.705604475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.701616 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" podStartSLOduration=131.701592696 podStartE2EDuration="2m11.701592696s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:29.700848306 +0000 UTC m=+153.217207399" watchObservedRunningTime="2026-02-17 15:56:29.701592696 +0000 UTC m=+153.217951769" Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.752222 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.790853 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.791661 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:30.291636571 +0000 UTC m=+153.807995644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.830038 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.832713 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bqslk"] Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.855214 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-17 15:51:28 +0000 UTC, rotation deadline is 2026-11-29 09:10:09.7865851 +0000 UTC Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.855303 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6833h13m39.931284957s for next certificate rotation Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.893755 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:29 crc kubenswrapper[4808]: E0217 15:56:29.904132 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.404108973 +0000 UTC m=+153.920468046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:29 crc kubenswrapper[4808]: I0217 15:56:29.942848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj"] Feb 17 15:56:29 crc kubenswrapper[4808]: W0217 15:56:29.943090 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ba06ea2_9714_49b5_8477_8eb056bb45a4.slice/crio-02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42 WatchSource:0}: Error finding container 02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42: Status 404 returned error can't find the container with id 02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42 Feb 17 15:56:29 crc kubenswrapper[4808]: W0217 15:56:29.999678 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3267bf97_7e39_410a_8502_3737bfb7f963.slice/crio-535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c WatchSource:0}: Error finding container 535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c: Status 404 returned error can't find the container with id 535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c 
Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.002113 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.002637 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.502606578 +0000 UTC m=+154.018965651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.013723 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.020147 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.028689 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.096929 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-srhjb" podStartSLOduration=132.096903318 podStartE2EDuration="2m12.096903318s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.094571374 +0000 UTC m=+153.610930447" watchObservedRunningTime="2026-02-17 15:56:30.096903318 +0000 UTC m=+153.613262391" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.105901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.106304 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.606288402 +0000 UTC m=+154.122647475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.108955 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x2jlg"] Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.122605 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod092b0577_f19f_413d_afc5_bdc3a40f7f75.slice/crio-faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b WatchSource:0}: Error finding container faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b: Status 404 returned error can't find the container with id faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.207167 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.207501 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.707480919 +0000 UTC m=+154.223839992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.213887 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-4x6s2" podStartSLOduration=133.213859121 podStartE2EDuration="2m13.213859121s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.208482976 +0000 UTC m=+153.724842049" watchObservedRunningTime="2026-02-17 15:56:30.213859121 +0000 UTC m=+153.730218204" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.218145 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.264228 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cbwrs" podStartSLOduration=133.264206633 podStartE2EDuration="2m13.264206633s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.251446697 +0000 UTC m=+153.767805780" watchObservedRunningTime="2026-02-17 15:56:30.264206633 +0000 UTC m=+153.780565696" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.276913 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.287515 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.300093 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-cg82l" podStartSLOduration=133.300066553 podStartE2EDuration="2m13.300066553s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.299083816 +0000 UTC m=+153.815442909" watchObservedRunningTime="2026-02-17 15:56:30.300066553 +0000 UTC m=+153.816425626" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.313927 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.314351 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.814337209 +0000 UTC m=+154.330696282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.332139 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f0bc0d_40c0_45b7_b6c4_7b285ba26c52.slice/crio-bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def WatchSource:0}: Error finding container bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def: Status 404 returned error can't find the container with id bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.346715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7697c8e_8996_44b9_8b66_965584ab26e2.slice/crio-3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21 WatchSource:0}: Error finding container 3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21: Status 404 returned error can't find the container with id 3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21 Feb 17 15:56:30 crc kubenswrapper[4808]: W0217 15:56:30.358454 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c6770e_9659_4e77_a7f1_f3ef06ec332d.slice/crio-fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37 WatchSource:0}: Error finding container fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37: Status 404 returned error can't find the container 
with id fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37 Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.358516 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-p8js4"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.359604 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.380490 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.403235 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-z4qfh"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.404340 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wlj8d" podStartSLOduration=133.404319163 podStartE2EDuration="2m13.404319163s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.390815737 +0000 UTC m=+153.907174810" watchObservedRunningTime="2026-02-17 15:56:30.404319163 +0000 UTC m=+153.920678236" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.407041 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p"] Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.415437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.416140 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:30.916120311 +0000 UTC m=+154.432479384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.463118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" event={"ID":"e20a6284-be62-4671-b75f-38b32dc20813","Type":"ContainerStarted","Data":"46488ee8d17bd26171359dd8a8e243ec82f66e1f7ec6373f1973739186bb8608"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.514644 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" event={"ID":"728793ed-1e89-455c-8d45-92c4ab08c1f6","Type":"ContainerStarted","Data":"0e2295ac419b2dc097f140848b76ed1756cacf4b44747f5a97fc1cfe0a8b9711"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.518216 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.572261 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.072211754 +0000 UTC m=+154.588570827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.586321 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerStarted","Data":"f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.586991 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.599661 4808 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j6vm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.599735 4808 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.614915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"7fcc3e4b3e72a540ddfc1939e87ac4ce7d3bb78661a8bb6f21a95f2e2afecfda"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.625468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.626095 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.12605284 +0000 UTC m=+154.642411913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.679422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" event={"ID":"71acbaae-e241-4c8e-ac2b-6dd40b15b494","Type":"ContainerStarted","Data":"3e89b193f707b0cb6f40ddbd3be40b4434d71dfa91333f6a8492228f51982188"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.680027 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" event={"ID":"71acbaae-e241-4c8e-ac2b-6dd40b15b494","Type":"ContainerStarted","Data":"fd73c63544ba33b7f4743f37f0b3438c023b57fcaebfe84fe6a81d3d921660d5"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.727377 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.728682 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.228663545 +0000 UTC m=+154.745022618 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.747340 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" event={"ID":"445cb05c-ac1a-44a2-864f-a87e0e7b29a5","Type":"ContainerStarted","Data":"042396a13a5329504a1fae70fc09bdfe2ab24d3cc60fa07dfc947083a18771e6"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.749206 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" event={"ID":"4f9ab75e-8898-4a0c-8630-c657450b648e","Type":"ContainerStarted","Data":"65cd2ca01645fae2a06426f9da167fcadb7900d0665e3ff976914945a22ae214"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.758170 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2jlg" event={"ID":"683fb061-dc67-431d-8a8a-d5a383794fef","Type":"ContainerStarted","Data":"9f060864e83d276fe705e23e0395af9e9048caed59a1822022d020e0a81836fa"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.760993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" event={"ID":"14c6770e-9659-4e77-a7f1-f3ef06ec332d","Type":"ContainerStarted","Data":"fe3c487b77200b515c446e5bb7350cae13ed5f93ef4fbaf06e4463c9ea364a37"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.766566 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" 
event={"ID":"25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa","Type":"ContainerStarted","Data":"507800b9841cc80b1865f606d7f977e50047f1cac5275561e18d7592e1f64531"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.766670 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.769654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" event={"ID":"9c7096e1-8ca1-483d-8e12-1cc79d28182a","Type":"ContainerStarted","Data":"20bff2b811aa836fd61417fa647f37f9de8e986a28076ef932a459fc43055c3e"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.770661 4808 patch_prober.go:28] interesting pod/console-operator-58897d9998-mxgf8 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.770703 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" podUID="25b3b271-e6e0-49c4-8fa2-17d8f8f2d5fa" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.776482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerStarted","Data":"1e875ee300c0488d8291c56021229aac4c3401a41ad1f2d3dc23a2913df4c895"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.776525 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" 
event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerStarted","Data":"c09ec5e2ee88b663934e8350a60d6fbc3a441771d379f75fb2671fa0bb4feda0"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.788352 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" event={"ID":"3267bf97-7e39-410a-8502-3737bfb7f963","Type":"ContainerStarted","Data":"535ae32eb6f2ea3ba0ed154b1b92dca3d81d27d6eb74531225f25eb06233123c"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.792320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerStarted","Data":"fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.792689 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.795486 4808 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cvqck container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.795560 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.800070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-dgt46" event={"ID":"fddf9ec8-447f-487c-a863-73ec68b90737","Type":"ContainerStarted","Data":"e85c9b5aaeb7b5b5a0c652c7848594f38267be8786ae7c4e2293038778dbf6fb"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.806873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" event={"ID":"3ba06ea2-9714-49b5-8477-8eb056bb45a4","Type":"ContainerStarted","Data":"02dd5d7b58edf49fd1e85175a803d2a8024bd4a6a6c96449839f3d310f3b9d42"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.818384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerStarted","Data":"026165e1bd109fad794dffddae09d3e255a5318f60f94f71f305c72e7d4ac00e"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.829380 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.829541 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.329507633 +0000 UTC m=+154.845866706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.829971 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.830620 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.330600782 +0000 UTC m=+154.846959845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.833187 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" event={"ID":"0131c573-bf76-49f4-9581-dd39ef60b27f","Type":"ContainerStarted","Data":"71d3523977c68d7be7a0fd789fd9343dd3bcfe2e002a98f8e88fb2e3a9cfcd13"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.842229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" event={"ID":"26fa95d4-8240-472a-a86f-98acf35ade67","Type":"ContainerStarted","Data":"4d0b6ff7e08b05b7d2862bcc5291ffbb8e1e202799902c4edd8fb74af81ab746"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.849507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" event={"ID":"b8b124f4-97ab-4512-a1a2-b93bc4e724e8","Type":"ContainerStarted","Data":"551a33e50c7398d763eee1244f86da9b8f2ba2e4db083390f8a3e5f9c52519f2"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.867274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" event={"ID":"d0ee93f1-93ac-4db2-b35e-5be5bded6541","Type":"ContainerStarted","Data":"306d019fd0a960ebe596dd62bde91fac66d83ac96ee596dcc0dcc7215c74b83c"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.882315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" event={"ID":"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52","Type":"ContainerStarted","Data":"bc6385422873ea61f34adbdf29b40165c69ab9207cdde9aa47560a45b2135def"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.903115 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podStartSLOduration=132.903088333 podStartE2EDuration="2m12.903088333s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.901370496 +0000 UTC m=+154.417729569" watchObservedRunningTime="2026-02-17 15:56:30.903088333 +0000 UTC m=+154.419447406" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.931143 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:30 crc kubenswrapper[4808]: E0217 15:56:30.932507 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.432474448 +0000 UTC m=+154.948833511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.941460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwcd2" event={"ID":"b26b861c-ec52-4685-846c-ea022517e9fb","Type":"ContainerStarted","Data":"3f79d6b9fcdc485bbc4f2a9c50e5848aa8428ba8b850f9b53eead931b8bbe676"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.944524 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" event={"ID":"092b0577-f19f-413d-afc5-bdc3a40f7f75","Type":"ContainerStarted","Data":"faf7562009ff6319cf2977233e4d63812224f9df6b0fc904ad604c768dd6d53b"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.963786 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerStarted","Data":"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a"} Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.983552 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-dgt46" podStartSLOduration=5.983530569 podStartE2EDuration="5.983530569s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.931797379 +0000 UTC m=+154.448156452" watchObservedRunningTime="2026-02-17 15:56:30.983530569 +0000 
UTC m=+154.499889642" Feb 17 15:56:30 crc kubenswrapper[4808]: I0217 15:56:30.999393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" event={"ID":"8ce31dac-90ec-4aa8-b765-1ee1add26c2d","Type":"ContainerStarted","Data":"0fc5e8095e93cd2824fbf14d2c5476e057998ed4379d9831be2286540517c16b"} Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.004919 4808 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pd6wv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.004962 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" podUID="8ce31dac-90ec-4aa8-b765-1ee1add26c2d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.005163 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.018627 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" podStartSLOduration=134.018597307 podStartE2EDuration="2m14.018597307s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:30.986130089 +0000 UTC m=+154.502489162" watchObservedRunningTime="2026-02-17 15:56:31.018597307 +0000 UTC m=+154.534956380" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.035160 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.037448 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.537416976 +0000 UTC m=+155.053776049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.038855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" event={"ID":"b7697c8e-8996-44b9-8b66-965584ab26e2","Type":"ContainerStarted","Data":"3d1ce1fbdbd9f0e5d8ef9187f84ba7865c9ffbb5a8858fa3a293eb024ef93b21"} Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.044152 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" event={"ID":"2d6f6cc0-7fc0-411c-800f-f98dc61b5035","Type":"ContainerStarted","Data":"5fe85b50798642cca4b4739ce6cf54363e8a0a7f3426dba4efea7f36d163df35"} Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.045908 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.045983 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.055264 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bz4bz" podStartSLOduration=134.055125255 podStartE2EDuration="2m14.055125255s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.050410898 +0000 UTC m=+154.566769971" watchObservedRunningTime="2026-02-17 15:56:31.055125255 +0000 UTC m=+154.571484318" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.106108 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" podStartSLOduration=133.106072722 podStartE2EDuration="2m13.106072722s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.104325846 +0000 UTC m=+154.620684919" watchObservedRunningTime="2026-02-17 15:56:31.106072722 +0000 UTC m=+154.622431795" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.138914 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.140619 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.640599447 +0000 UTC m=+155.156958520 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.140652 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podStartSLOduration=134.140639488 podStartE2EDuration="2m14.140639488s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.138220202 +0000 UTC m=+154.654579275" watchObservedRunningTime="2026-02-17 15:56:31.140639488 +0000 UTC m=+154.656998561" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.179349 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-9l858" podStartSLOduration=134.179323524 podStartE2EDuration="2m14.179323524s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.1765893 +0000 UTC m=+154.692948373" watchObservedRunningTime="2026-02-17 15:56:31.179323524 +0000 UTC m=+154.695682607" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.230200 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" podStartSLOduration=134.23017813 podStartE2EDuration="2m14.23017813s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.227332373 +0000 UTC m=+154.743691446" watchObservedRunningTime="2026-02-17 15:56:31.23017813 +0000 UTC m=+154.746537193" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.240686 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.241070 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.741054204 +0000 UTC m=+155.257413277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.265304 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" podStartSLOduration=134.265280019 podStartE2EDuration="2m14.265280019s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.264287602 +0000 UTC m=+154.780646685" watchObservedRunningTime="2026-02-17 15:56:31.265280019 +0000 UTC m=+154.781639092" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.300955 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-hdg74" podStartSLOduration=134.300928143 podStartE2EDuration="2m14.300928143s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.299015071 +0000 UTC m=+154.815374154" watchObservedRunningTime="2026-02-17 15:56:31.300928143 +0000 UTC m=+154.817287216" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.345623 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.345978 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.845956031 +0000 UTC m=+155.362315104 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.395429 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" podStartSLOduration=133.395404719 podStartE2EDuration="2m13.395404719s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.351906572 +0000 UTC m=+154.868265645" watchObservedRunningTime="2026-02-17 15:56:31.395404719 +0000 UTC m=+154.911763792" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.395915 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-jwcd2" podStartSLOduration=134.395910512 podStartE2EDuration="2m14.395910512s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:31.394667459 +0000 UTC m=+154.911026542" 
watchObservedRunningTime="2026-02-17 15:56:31.395910512 +0000 UTC m=+154.912269585" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.453542 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.454200 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:31.954178768 +0000 UTC m=+155.470537841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.555217 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.555694 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.055674413 +0000 UTC m=+155.572033496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.656984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.657830 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.157807305 +0000 UTC m=+155.674166368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.713071 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-jwcd2" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.725322 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.725415 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.744956 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.745359 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.762515 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.762763 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.262738814 +0000 UTC m=+155.779097907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.762951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.763373 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.26336434 +0000 UTC m=+155.779723403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.828867 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.829789 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.843527 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.864499 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.864952 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.364930487 +0000 UTC m=+155.881289570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:31 crc kubenswrapper[4808]: I0217 15:56:31.966936 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:31 crc kubenswrapper[4808]: E0217 15:56:31.967507 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.467483041 +0000 UTC m=+155.983842114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.069695 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.070085 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.570066256 +0000 UTC m=+156.086425329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.109913 4808 generic.go:334] "Generic (PLEG): container finished" podID="98bde021-9860-4b02-9223-512db6787eff" containerID="1e875ee300c0488d8291c56021229aac4c3401a41ad1f2d3dc23a2913df4c895" exitCode=0 Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.110021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerDied","Data":"1e875ee300c0488d8291c56021229aac4c3401a41ad1f2d3dc23a2913df4c895"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.158395 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" event={"ID":"4f9ab75e-8898-4a0c-8630-c657450b648e","Type":"ContainerStarted","Data":"f1af7eaa0f66662d226a2eaafb6575bc4d9168c89ee24fef058ac5d4fe51291e"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.171407 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerStarted","Data":"1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.171420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.171800 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.671782477 +0000 UTC m=+156.188141550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.172537 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.175315 4808 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sbr84 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.175368 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: 
connection refused" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.204977 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" event={"ID":"b7697c8e-8996-44b9-8b66-965584ab26e2","Type":"ContainerStarted","Data":"a95102ed9187227caa549de5d8578d98e8c9e0e5d26a212f6a25f3bd1988b467"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.207237 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.208031 4808 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bmq9l container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body= Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.208092 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" podUID="b7697c8e-8996-44b9-8b66-965584ab26e2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.236063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cw29n" event={"ID":"26fa95d4-8240-472a-a86f-98acf35ade67","Type":"ContainerStarted","Data":"0826966d6d87149771be9ceb8e0a5daef9d5f2fe2ed88b1c8fb880f6e9c0614c"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.260199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" 
event={"ID":"e20a6284-be62-4671-b75f-38b32dc20813","Type":"ContainerStarted","Data":"3a7f1bc676889c728bffbdbaee82723db47d3e80b3bd5883c8088aa6580ee1e7"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.270718 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podStartSLOduration=134.270697082 podStartE2EDuration="2m14.270697082s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.204611225 +0000 UTC m=+155.720970318" watchObservedRunningTime="2026-02-17 15:56:32.270697082 +0000 UTC m=+155.787056155" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.270926 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" event={"ID":"b8b124f4-97ab-4512-a1a2-b93bc4e724e8","Type":"ContainerStarted","Data":"8920c9f68a2dada17aac710b71d1b8e3fde3fcfe0616a9282fef97145c312ea8"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.274747 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.274877 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.774861025 +0000 UTC m=+156.291220118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.275671 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.775652806 +0000 UTC m=+156.292011869 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.275165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.322297 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" 
event={"ID":"8ce31dac-90ec-4aa8-b765-1ee1add26c2d","Type":"ContainerStarted","Data":"4b2be5da98db133479da22cad2f9c7b90db7982322b06e78a4e711739d997cb8"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.324098 4808 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pd6wv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.324149 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" podUID="8ce31dac-90ec-4aa8-b765-1ee1add26c2d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.394287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.395617 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:32.895597441 +0000 UTC m=+156.411956514 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.397943 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" event={"ID":"3ba06ea2-9714-49b5-8477-8eb056bb45a4","Type":"ContainerStarted","Data":"41ed52098133b44c5c9e31150d6c9aa64c662fbf8019ef662f732bcca8867818"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.403307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2jlg" event={"ID":"683fb061-dc67-431d-8a8a-d5a383794fef","Type":"ContainerStarted","Data":"140a91348592f9d5be82cb0c14961712188766e6c7cef5c96331471907718163"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.422154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mggmj" event={"ID":"2d6f6cc0-7fc0-411c-800f-f98dc61b5035","Type":"ContainerStarted","Data":"5aa559ed0747a6b2ab13d8fac6a52f35c01eb0325ebaa5a2cb811a356cb86be1"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.438760 4808 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7jp8q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]log ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]etcd ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 17 15:56:32 crc kubenswrapper[4808]: 
[+]poststarthook/generic-apiserver-start-informers ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/max-in-flight-filter ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 17 15:56:32 crc kubenswrapper[4808]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/project.openshift.io-projectcache ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-startinformers ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 17 15:56:32 crc kubenswrapper[4808]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 17 15:56:32 crc kubenswrapper[4808]: livez check failed Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.438844 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" podUID="d0ee93f1-93ac-4db2-b35e-5be5bded6541" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.444749 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l" podStartSLOduration=134.444731829 podStartE2EDuration="2m14.444731829s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.280181339 +0000 UTC m=+155.796540412" 
watchObservedRunningTime="2026-02-17 15:56:32.444731829 +0000 UTC m=+155.961090902" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.455986 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerStarted","Data":"4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.456481 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerStarted","Data":"b07a627c0e44e85d03382e77fdbb6e3a6fef1ba1b49d24c7a30b720a10a8ce6d"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.483912 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" event={"ID":"71acbaae-e241-4c8e-ac2b-6dd40b15b494","Type":"ContainerStarted","Data":"045401e7538b14d1ef3741ef7fcf9686f582e526e1fe704e011788219910ffe7"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.498074 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.501157 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.001134955 +0000 UTC m=+156.517494018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.519809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" event={"ID":"092b0577-f19f-413d-afc5-bdc3a40f7f75","Type":"ContainerStarted","Data":"ecd09fc45743a6f9fc3cebcbe467096f9f07928922d13c4afa26394c7b053c73"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.519868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" event={"ID":"092b0577-f19f-413d-afc5-bdc3a40f7f75","Type":"ContainerStarted","Data":"22ba8a60fb5ca2d89b7a16fec0516beb65d2ea05ef0a7f8d733398a77d340355"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.523460 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2lsb7" podStartSLOduration=135.523432228 podStartE2EDuration="2m15.523432228s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.445206193 +0000 UTC m=+155.961565266" watchObservedRunningTime="2026-02-17 15:56:32.523432228 +0000 UTC m=+156.039791301" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.539727 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" 
event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerStarted","Data":"a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.540849 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.577425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" event={"ID":"445cb05c-ac1a-44a2-864f-a87e0e7b29a5","Type":"ContainerStarted","Data":"72bc4c8d24437e9e749d7d4bcd97db5d12fdae8924c3ed3363c14461f3b2b8dd"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.578454 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.607402 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.608171 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-9bcck" podStartSLOduration=134.60815053 podStartE2EDuration="2m14.60815053s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.606761852 +0000 UTC m=+156.123120925" watchObservedRunningTime="2026-02-17 15:56:32.60815053 +0000 UTC m=+156.124509603" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.609083 4808 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.109056984 +0000 UTC m=+156.625416057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.609301 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-lzvjs" podStartSLOduration=135.60929358 podStartE2EDuration="2m15.60929358s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.524005124 +0000 UTC m=+156.040364197" watchObservedRunningTime="2026-02-17 15:56:32.60929358 +0000 UTC m=+156.125652653" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.608210 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.611829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" event={"ID":"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb","Type":"ContainerStarted","Data":"4dda1c6fa752ebf39aad20ebafc91a0bdacb7ea3eda95ca701959d2729712306"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 
15:56:32.611874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" event={"ID":"e8aed8e7-df36-4a82-a7d6-8a65d9a28eeb","Type":"ContainerStarted","Data":"82c7b8498052c7db6301b6c7d381474378ef0fd0d5b7fab82d60f602abb43e6f"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.629071 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" event={"ID":"14c6770e-9659-4e77-a7f1-f3ef06ec332d","Type":"ContainerStarted","Data":"72c20f12164ebf86d6f323fb2ad21fd775ed7625f202920a874c45d32d619b74"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.629905 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.653497 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" event={"ID":"94f0bc0d-40c0-45b7-b6c4-7b285ba26c52","Type":"ContainerStarted","Data":"1bbca72abc7557abc6c4328ff389a7c0fb8106ba97b69e12d3ae85589a684f81"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.668807 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bqslk" podStartSLOduration=134.668783269 podStartE2EDuration="2m14.668783269s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.668638525 +0000 UTC m=+156.184997608" watchObservedRunningTime="2026-02-17 15:56:32.668783269 +0000 UTC m=+156.185142342" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.668989 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" event={"ID":"0b9e5453-e92d-46cd-b8fb-c989f00809ae","Type":"ContainerStarted","Data":"9f5dabab73befbc735ecb4209850931ff7234f5cccba6b61340a80ac7fbbbb27"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.669037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" event={"ID":"0b9e5453-e92d-46cd-b8fb-c989f00809ae","Type":"ContainerStarted","Data":"c1bb38e7834b1e3cca31499b884f983387b4c32fdcbfdd54789bcf688dc501ea"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.709208 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" podStartSLOduration=135.709185352 podStartE2EDuration="2m15.709185352s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.706907531 +0000 UTC m=+156.223266604" watchObservedRunningTime="2026-02-17 15:56:32.709185352 +0000 UTC m=+156.225544425" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.709288 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" event={"ID":"4b736927-813a-4b21-80d6-a0b4106e2c95","Type":"ContainerStarted","Data":"55a0f5580ac0a9a8933f18ea49236a08177ca4b4ae0093a0452031393efe2bcc"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.709339 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" event={"ID":"4b736927-813a-4b21-80d6-a0b4106e2c95","Type":"ContainerStarted","Data":"3f615bb48b49156af7952e03fd9d3dfd72050ff4da2c586b454560e08dea8345"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.710476 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.712674 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.212661606 +0000 UTC m=+156.729020679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.726321 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" event={"ID":"3267bf97-7e39-410a-8502-3737bfb7f963","Type":"ContainerStarted","Data":"f9cda0bd85d70f2bb040be7aa45aad29ac3dcd5bbc8469e158ce44f2db1d2b3c"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.738347 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:32 crc kubenswrapper[4808]: [+]process-running ok 
Feb 17 15:56:32 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.738450 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.741685 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-jwcd2" event={"ID":"b26b861c-ec52-4685-846c-ea022517e9fb","Type":"ContainerStarted","Data":"03010ae54b2a47c5cbf745bb4ec8340b35db2e76f02b8106933962c3f82cc328"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.747696 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8mjrc" podStartSLOduration=135.747679223 podStartE2EDuration="2m15.747679223s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.746186133 +0000 UTC m=+156.262545206" watchObservedRunningTime="2026-02-17 15:56:32.747679223 +0000 UTC m=+156.264038286" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.756121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4qfh" event={"ID":"9bca2625-c55d-4a28-b37d-2ac43d181e26","Type":"ContainerStarted","Data":"7e31de47cf5c126931a9310c441850afa6ddd8361e63e6ea7b4760988d17591f"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.756211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-z4qfh" event={"ID":"9bca2625-c55d-4a28-b37d-2ac43d181e26","Type":"ContainerStarted","Data":"9799a3d840179bd0f9bd6c405739949ce024d0e6d6998a0d416443b4c98e0d5f"} Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 
15:56:32.765715 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.766281 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-k48nr" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.779872 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.812773 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" podStartSLOduration=135.812756584 podStartE2EDuration="2m15.812756584s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.812264201 +0000 UTC m=+156.328623274" watchObservedRunningTime="2026-02-17 15:56:32.812756584 +0000 UTC m=+156.329115657" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.816845 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.817037 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.316997738 +0000 UTC m=+156.833356811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.817247 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.819402 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.319388173 +0000 UTC m=+156.835747246 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.892944 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-54vjj" podStartSLOduration=135.892922181 podStartE2EDuration="2m15.892922181s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.854869893 +0000 UTC m=+156.371228966" watchObservedRunningTime="2026-02-17 15:56:32.892922181 +0000 UTC m=+156.409281244" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.895470 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8zrdj" podStartSLOduration=134.89546175 podStartE2EDuration="2m14.89546175s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.89250251 +0000 UTC m=+156.408861583" watchObservedRunningTime="2026-02-17 15:56:32.89546175 +0000 UTC m=+156.411820823" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.918253 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.918772 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.41872729 +0000 UTC m=+156.935086373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.919789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.920772 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t8ws2" podStartSLOduration=134.920761274 podStartE2EDuration="2m14.920761274s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.917960299 +0000 UTC m=+156.434319372" watchObservedRunningTime="2026-02-17 15:56:32.920761274 +0000 UTC m=+156.437120347" Feb 
17 15:56:32 crc kubenswrapper[4808]: E0217 15:56:32.924137 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.424118385 +0000 UTC m=+156.940477458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.962849 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jw4gs" podStartSLOduration=134.962817162 podStartE2EDuration="2m14.962817162s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.949138162 +0000 UTC m=+156.465497235" watchObservedRunningTime="2026-02-17 15:56:32.962817162 +0000 UTC m=+156.479176235" Feb 17 15:56:32 crc kubenswrapper[4808]: I0217 15:56:32.995466 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vsl5p" podStartSLOduration=135.995435385 podStartE2EDuration="2m15.995435385s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:32.977024786 +0000 UTC m=+156.493383879" 
watchObservedRunningTime="2026-02-17 15:56:32.995435385 +0000 UTC m=+156.511794458" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.034034 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.034592 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.534553293 +0000 UTC m=+157.050912366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.084052 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" podStartSLOduration=135.084026651 podStartE2EDuration="2m15.084026651s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.032679482 +0000 UTC m=+156.549038575" watchObservedRunningTime="2026-02-17 15:56:33.084026651 +0000 UTC m=+156.600385724" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.130901 
4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" podStartSLOduration=136.130877737 podStartE2EDuration="2m16.130877737s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.086781525 +0000 UTC m=+156.603140598" watchObservedRunningTime="2026-02-17 15:56:33.130877737 +0000 UTC m=+156.647236810" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.136466 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.136944 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.636925741 +0000 UTC m=+157.153284814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.194370 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-z4qfh" podStartSLOduration=8.194347594 podStartE2EDuration="8.194347594s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.191022535 +0000 UTC m=+156.707381608" watchObservedRunningTime="2026-02-17 15:56:33.194347594 +0000 UTC m=+156.710706687" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.242760 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.243057 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.743039601 +0000 UTC m=+157.259398674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.345545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.345968 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.845953435 +0000 UTC m=+157.362312508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.447295 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.447486 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.94746297 +0000 UTC m=+157.463822043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.447970 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.448372 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:33.948359125 +0000 UTC m=+157.464718198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.541322 4808 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-j6dgq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.541871 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.550089 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.550327 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:34.050311052 +0000 UTC m=+157.566670125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.631708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mxgf8" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.652245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.652675 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.152661121 +0000 UTC m=+157.669020194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.717684 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:33 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:33 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:33 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.717748 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.754221 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.754724 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:34.25470135 +0000 UTC m=+157.771060423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.788600 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" event={"ID":"14c6770e-9659-4e77-a7f1-f3ef06ec332d","Type":"ContainerStarted","Data":"2470da7936a29f3f56730e7168918a901e1d6d72c1ad9da5572d1943312ac952"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.790897 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"5b040f8b829760acc053068dc69cdb50a3a6fb21d82b5d5b1a076a6fc10e2d28"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.794560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" event={"ID":"728793ed-1e89-455c-8d45-92c4ab08c1f6","Type":"ContainerStarted","Data":"1515b2c38d6c463cdf7029191fa4639f05e318748ff6cbc7fa4190670301824e"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.801041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-p8js4" event={"ID":"4b736927-813a-4b21-80d6-a0b4106e2c95","Type":"ContainerStarted","Data":"cb206168ab129d006ad7d5f6d31c6572e07b746c93ed7110887c23e590e6dff2"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.807247 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" event={"ID":"98bde021-9860-4b02-9223-512db6787eff","Type":"ContainerStarted","Data":"62ab951b66683ebc98e2343b94934e9ee53c8fd1fe8a6fdfd37370d4c9bcaf75"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.807444 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.814038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" event={"ID":"4f9ab75e-8898-4a0c-8630-c657450b648e","Type":"ContainerStarted","Data":"f95a1b99d065c0511cc8e26a1c74ada25d15226411a1c0db49831c8c1b94a36e"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.819523 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x2jlg" event={"ID":"683fb061-dc67-431d-8a8a-d5a383794fef","Type":"ContainerStarted","Data":"6d7b02d0e6d15d7663f2b440e1a47856e12a46f8aac060e4ba78b162a63bd943"} Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.819587 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x2jlg" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.824607 4808 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sbr84 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.824676 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: 
connect: connection refused" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.839991 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pd6wv" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.843291 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.847779 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z82w8" podStartSLOduration=135.847756757 podStartE2EDuration="2m15.847756757s" podCreationTimestamp="2026-02-17 15:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.843029429 +0000 UTC m=+157.359388502" watchObservedRunningTime="2026-02-17 15:56:33.847756757 +0000 UTC m=+157.364115830" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.856270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.863025 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.36300073 +0000 UTC m=+157.879360003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.949319 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x2jlg" podStartSLOduration=8.949299393 podStartE2EDuration="8.949299393s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:33.948955834 +0000 UTC m=+157.465314907" watchObservedRunningTime="2026-02-17 15:56:33.949299393 +0000 UTC m=+157.465658456" Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.969347 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.969664 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.469626084 +0000 UTC m=+157.985985167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:33 crc kubenswrapper[4808]: I0217 15:56:33.970236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:33 crc kubenswrapper[4808]: E0217 15:56:33.985325 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.485282557 +0000 UTC m=+158.001641630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.058189 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-n5p8z" podStartSLOduration=137.058166598 podStartE2EDuration="2m17.058166598s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:34.044263812 +0000 UTC m=+157.560622885" watchObservedRunningTime="2026-02-17 15:56:34.058166598 +0000 UTC m=+157.574525681" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.071833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.072186 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.572164316 +0000 UTC m=+158.088523389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.174814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.175294 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.675280636 +0000 UTC m=+158.191639709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.201195 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" podStartSLOduration=137.201176906 podStartE2EDuration="2m17.201176906s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:34.19908644 +0000 UTC m=+157.715445513" watchObservedRunningTime="2026-02-17 15:56:34.201176906 +0000 UTC m=+157.717535979" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.251019 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22x8m"] Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.252022 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.264400 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.276711 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.276990 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.277044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.277080 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.277165 4808 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.77713342 +0000 UTC m=+158.293492493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.341373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22x8m"] Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381094 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381665 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.381876 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.881850053 +0000 UTC m=+158.398209126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.381892 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.432891 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"] Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.434286 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.442047 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.450428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"community-operators-22x8m\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.470494 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"] Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482005 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.482165 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.982139056 +0000 UTC m=+158.498498129 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482217 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.482559 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:34.982543226 +0000 UTC m=+158.498902299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482597 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.482667 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583592 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.583743 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-17 15:56:35.083708363 +0000 UTC m=+158.600067436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583961 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.583992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.584028 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.584419 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.084402691 +0000 UTC m=+158.600761764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.584596 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.584669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.595958 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22x8m"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.623270 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6vvmq"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.641802 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.652300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.656431 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"certified-operators-hn7fn\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.678612 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bmq9l"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686521 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686553 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.686617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.686807 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.186787071 +0000 UTC m=+158.703146144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.722755 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:56:34 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld
Feb 17 15:56:34 crc kubenswrapper[4808]: [+]process-running ok
Feb 17 15:56:34 crc kubenswrapper[4808]: healthz check failed
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.722819 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.787042 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788176 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788239 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788301 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.788984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.789256 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.789672 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.289658532 +0000 UTC m=+158.806017605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.840334 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.841396 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.854303 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"]
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.861238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"community-operators-6vvmq\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.864851 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"815d41d195a9858305817e3cb2e19c39ddeead1311aafdc5105711ad98beaada"}
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.864905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"6ec7a39da2b5d4550f24f7e026c6d83f4682118c7b304a2c82fa1e54b603f474"}
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889017 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889551 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: I0217 15:56:34.889612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:34 crc kubenswrapper[4808]: E0217 15:56:34.930980 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.430943374 +0000 UTC m=+158.947302447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.007623 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.008675 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.009852 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.009947 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.010018 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.010335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.010431 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.010706 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.510692921 +0000 UTC m=+159.027051994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.067013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"certified-operators-wsbjl\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.111858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.112279 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.612255798 +0000 UTC m=+159.128614871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.184728 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.216671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.217110 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.717094594 +0000 UTC m=+159.233453667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.318274 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.318408 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.818381733 +0000 UTC m=+159.334740806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.318691 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.319072 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.819063791 +0000 UTC m=+159.335422864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.364618 4808 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.396310 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22x8m"]
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.420218 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.420723 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:35.92067709 +0000 UTC m=+159.437036163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.526398 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.527237 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.027223332 +0000 UTC m=+159.543582395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.630212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.630598 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.130559687 +0000 UTC m=+159.646918760 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.724214 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:56:35 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld
Feb 17 15:56:35 crc kubenswrapper[4808]: [+]process-running ok
Feb 17 15:56:35 crc kubenswrapper[4808]: healthz check failed
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.724259 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.732813 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.733145 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.233133541 +0000 UTC m=+159.749492614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.834497 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.834661 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.334626565 +0000 UTC m=+159.850985628 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.835402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.835876 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.335860469 +0000 UTC m=+159.852219542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.914711 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"]
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.929043 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" event={"ID":"69e8c398-683b-47dc-a517-633d625cbd97","Type":"ContainerStarted","Data":"7aa9eff9e442f60586b42eaff2de3d9580aae6c64dad1bbdef28119c4acd70c1"}
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.939124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:35 crc kubenswrapper[4808]: E0217 15:56:35.939544 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.439524423 +0000 UTC m=+159.955883486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.977441 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerStarted","Data":"a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432"}
Feb 17 15:56:35 crc kubenswrapper[4808]: I0217 15:56:35.977486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerStarted","Data":"88ab9dc080b2cadb5ff2951ac6094d56029248c1c148ac36b7e2a6167225bf7c"}
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.044999 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: E0217 15:56:36.046331 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.546316092 +0000 UTC m=+160.062675165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.058621 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dxj7b" podStartSLOduration=11.058597303 podStartE2EDuration="11.058597303s" podCreationTimestamp="2026-02-17 15:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:36.023561576 +0000 UTC m=+159.539920649" watchObservedRunningTime="2026-02-17 15:56:36.058597303 +0000 UTC m=+159.574956396"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.060361 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.098951 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"]
Feb 17 15:56:36 crc kubenswrapper[4808]: W0217 15:56:36.122782 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f04008a_114c_4f19_971a_34fa574846f5.slice/crio-735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc WatchSource:0}: Error finding container 735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc: Status 404 returned error can't find the container with id 735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.146466 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 17 15:56:36 crc kubenswrapper[4808]: E0217 15:56:36.147331 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.647309333 +0000 UTC m=+160.163668406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.208715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.213085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.215865 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.244364 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"]
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250797 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.250892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\")
pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: E0217 15:56:36.252202 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-17 15:56:36.752188279 +0000 UTC m=+160.268547352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fmfh5" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.321905 4808 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-17T15:56:35.364647235Z","Handler":null,"Name":""} Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.326869 4808 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.326924 4808 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353314 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353533 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353629 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.353668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.355812 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.355833 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.404691 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.406597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"redhat-marketplace-cs597\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.454497 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.462946 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.463003 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.556969 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fmfh5\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.566747 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.602449 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.603621 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.632207 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.641955 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.657870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.657937 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.657969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.718881 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:36 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:36 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:36 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.718974 4808 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.757786 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.759881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.759921 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.759956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.760433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 
15:56:36.760667 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.777899 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7jp8q" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.784639 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"redhat-marketplace-ts9gs\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.919603 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:56:36 crc kubenswrapper[4808]: I0217 15:56:36.953337 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.004294 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.012006 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f04008a-114c-4f19-971a-34fa574846f5" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.012095 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.012137 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerStarted","Data":"735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.026280 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.047078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerStarted","Data":"126635f0be61976c959568021a2dceebba5ec8a4421ba4bd848eb5998d5c720b"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.069257 4808 generic.go:334] "Generic (PLEG): container finished" podID="543b2019-8399-411e-8e8b-45787b96873f" 
containerID="a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.069353 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.077302 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerID="b039d42ff08392f60bfd69fd494b2249c19f74796e443b4b4b8b827c93e49b48" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.077390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"b039d42ff08392f60bfd69fd494b2249c19f74796e443b4b4b8b827c93e49b48"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.077414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerStarted","Data":"a45a3dcf61a1bf78b3c958287ad11993acb14303ea923a5033d56896c26a6ab3"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.101974 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"a0e2eeefc3bf87bde55affaedf8d295a474fecb9dcf906520b5bc6b26957f78c"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.104175 4808 generic.go:334] "Generic (PLEG): container finished" podID="57300b85-6c7e-49da-bb14-40055f48a85c" containerID="a0e2eeefc3bf87bde55affaedf8d295a474fecb9dcf906520b5bc6b26957f78c" exitCode=0 Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.105042 4808 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerStarted","Data":"978f619d6b3d5011491c32f00a6237544c3cbc039e50f7389d14d76374df3c9e"} Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.167147 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.287470 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:56:37 crc kubenswrapper[4808]: W0217 15:56:37.342898 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92dfded8_f453_4bfc_809e_e7ed7e25de27.slice/crio-f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a WatchSource:0}: Error finding container f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a: Status 404 returned error can't find the container with id f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.372806 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.373749 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.390059 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.390146 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.390844 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.462696 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s2fz5" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.471081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.471161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.572873 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.573012 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.573029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.608614 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.611679 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.612909 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.616336 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.675146 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.675378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.675558 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.700284 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.725203 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 17 15:56:37 crc 
kubenswrapper[4808]: [-]has-synced failed: reason withheld Feb 17 15:56:37 crc kubenswrapper[4808]: [+]process-running ok Feb 17 15:56:37 crc kubenswrapper[4808]: healthz check failed Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.725286 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.776657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.776741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.776782 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.778251 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " 
pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.778280 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.780851 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.796900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"redhat-operators-8jsrz\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.912630 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.912696 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.912977 4808 patch_prober.go:28] interesting pod/downloads-7954f5f757-wlj8d container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 
10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.913053 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wlj8d" podUID="116ae5bc-cf7e-45ad-9800-501bcfc04ff7" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 17 15:56:37 crc kubenswrapper[4808]: I0217 15:56:37.934915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.001101 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.018673 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"]
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.019719 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.044268 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"]
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.082507 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.082565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.082672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.138022 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerStarted","Data":"8d3e6325dff416527f0b5f7a426deb2ee9273e60e45b536362885c914658d019"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.151501 4808 generic.go:334] "Generic (PLEG): container finished" podID="48efd125-e3aa-444d-91a3-fa915be48b46" containerID="2d27bebccfda20ebcc5c228a8194fccc9e95ec81e20baedc530a917fdd03e867" exitCode=0
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.151728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"2d27bebccfda20ebcc5c228a8194fccc9e95ec81e20baedc530a917fdd03e867"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.161848 4808 generic.go:334] "Generic (PLEG): container finished" podID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2" exitCode=0
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.161932 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.161991 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerStarted","Data":"f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.185557 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.185661 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.185761 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.188507 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.188610 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.193538 4808 generic.go:334] "Generic (PLEG): container finished" podID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerID="4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44" exitCode=0
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.193966 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerDied","Data":"4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.209877 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerStarted","Data":"2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.209930 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerStarted","Data":"6e3f1081b00b18d9f343d94a49f4eb8fd3475f6dc82e8e6676483c99ff105dda"}
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.210606 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.218387 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"redhat-operators-qhtfr\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") " pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.219499 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-hdg74"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.219536 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-hdg74"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.245067 4808 patch_prober.go:28] interesting pod/console-f9d7485db-hdg74 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.245131 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-hdg74" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.262739 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" podStartSLOduration=141.262714588 podStartE2EDuration="2m21.262714588s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:38.255064441 +0000 UTC m=+161.771423514" watchObservedRunningTime="2026-02-17 15:56:38.262714588 +0000 UTC m=+161.779073661"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.272310 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"]
Feb 17 15:56:38 crc kubenswrapper[4808]: W0217 15:56:38.332961 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode22d34a8_92f6_4a2a_a0f5_e063c25afac1.slice/crio-74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2 WatchSource:0}: Error finding container 74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2: Status 404 returned error can't find the container with id 74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.396269 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.712421 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.717122 4808 patch_prober.go:28] interesting pod/router-default-5444994796-jwcd2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 17 15:56:38 crc kubenswrapper[4808]: [-]has-synced failed: reason withheld
Feb 17 15:56:38 crc kubenswrapper[4808]: [+]process-running ok
Feb 17 15:56:38 crc kubenswrapper[4808]: healthz check failed
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.717179 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-jwcd2" podUID="b26b861c-ec52-4685-846c-ea022517e9fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.725009 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84"
Feb 17 15:56:38 crc kubenswrapper[4808]: I0217 15:56:38.927704 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"]
Feb 17 15:56:38 crc kubenswrapper[4808]: W0217 15:56:38.945451 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf27437e_6547_4705_bbe7_08a726639dbe.slice/crio-1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480 WatchSource:0}: Error finding container 1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480: Status 404 returned error can't find the container with id 1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.225214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerStarted","Data":"870afdbdf8bbaf38a8a882e84c4b0e9c69042050dd1e130951409c7fee498caf"}
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.228379 4808 generic.go:334] "Generic (PLEG): container finished" podID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerID="3c46a03c8aecba377b0d1ea2fda18a067c3dd9d9e53d4229b5338fca0d7a98e0" exitCode=0
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.228477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"3c46a03c8aecba377b0d1ea2fda18a067c3dd9d9e53d4229b5338fca0d7a98e0"}
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.228504 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerStarted","Data":"74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2"}
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.243496 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.243473535 podStartE2EDuration="2.243473535s" podCreationTimestamp="2026-02-17 15:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:39.237621367 +0000 UTC m=+162.753980440" watchObservedRunningTime="2026-02-17 15:56:39.243473535 +0000 UTC m=+162.759832608"
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.281037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerStarted","Data":"1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480"}
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.642747 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.722497 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.728900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-jwcd2"
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.816562 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") pod \"7baa3ebb-6bb0-4744-b096-971958bcd263\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") "
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.816640 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") pod \"7baa3ebb-6bb0-4744-b096-971958bcd263\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") "
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.817513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") pod \"7baa3ebb-6bb0-4744-b096-971958bcd263\" (UID: \"7baa3ebb-6bb0-4744-b096-971958bcd263\") "
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.817820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.818802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume" (OuterVolumeSpecName: "config-volume") pod "7baa3ebb-6bb0-4744-b096-971958bcd263" (UID: "7baa3ebb-6bb0-4744-b096-971958bcd263"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.823486 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c" (OuterVolumeSpecName: "kube-api-access-gmv2c") pod "7baa3ebb-6bb0-4744-b096-971958bcd263" (UID: "7baa3ebb-6bb0-4744-b096-971958bcd263"). InnerVolumeSpecName "kube-api-access-gmv2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.827227 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b88c3e5f-7390-477c-ae74-aced26a8ddf9-metrics-certs\") pod \"network-metrics-daemon-z8tn8\" (UID: \"b88c3e5f-7390-477c-ae74-aced26a8ddf9\") " pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.842582 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7baa3ebb-6bb0-4744-b096-971958bcd263" (UID: "7baa3ebb-6bb0-4744-b096-971958bcd263"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.919079 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmv2c\" (UniqueName: \"kubernetes.io/projected/7baa3ebb-6bb0-4744-b096-971958bcd263-kube-api-access-gmv2c\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.919123 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7baa3ebb-6bb0-4744-b096-971958bcd263-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:39 crc kubenswrapper[4808]: I0217 15:56:39.919134 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7baa3ebb-6bb0-4744-b096-971958bcd263-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.072498 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z8tn8"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.228524 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 15:56:40 crc kubenswrapper[4808]: E0217 15:56:40.228944 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerName="collect-profiles"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.228980 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerName="collect-profiles"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.229154 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" containerName="collect-profiles"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.229939 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.232516 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.239155 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.275008 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.327685 4808 generic.go:334] "Generic (PLEG): container finished" podID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerID="870afdbdf8bbaf38a8a882e84c4b0e9c69042050dd1e130951409c7fee498caf" exitCode=0
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.328518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerDied","Data":"870afdbdf8bbaf38a8a882e84c4b0e9c69042050dd1e130951409c7fee498caf"}
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.338081 4808 generic.go:334] "Generic (PLEG): container finished" podID="df27437e-6547-4705-bbe7-08a726639dbe" containerID="7be6898f1f88ea761e64c2d8022df14c7db8627e97d2f080f379df7514b92a85" exitCode=0
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.338204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"7be6898f1f88ea761e64c2d8022df14c7db8627e97d2f080f379df7514b92a85"}
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.348488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr" event={"ID":"7baa3ebb-6bb0-4744-b096-971958bcd263","Type":"ContainerDied","Data":"b07a627c0e44e85d03382e77fdbb6e3a6fef1ba1b49d24c7a30b720a10a8ce6d"}
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.348562 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b07a627c0e44e85d03382e77fdbb6e3a6fef1ba1b49d24c7a30b720a10a8ce6d"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.348515 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.438121 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.438178 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.520848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z8tn8"]
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.539854 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.540028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.539903 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: W0217 15:56:40.545162 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb88c3e5f_7390_477c_ae74_aced26a8ddf9.slice/crio-1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51 WatchSource:0}: Error finding container 1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51: Status 404 returned error can't find the container with id 1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.560461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:40 crc kubenswrapper[4808]: I0217 15:56:40.852143 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.297462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 17 15:56:41 crc kubenswrapper[4808]: W0217 15:56:41.307348 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7c3eff00_0ae7_4c6a_ad5f_931c2cf09940.slice/crio-641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685 WatchSource:0}: Error finding container 641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685: Status 404 returned error can't find the container with id 641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.369247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940","Type":"ContainerStarted","Data":"641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685"}
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.373421 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" event={"ID":"b88c3e5f-7390-477c-ae74-aced26a8ddf9","Type":"ContainerStarted","Data":"a8179ccd7a37be51ec49686db81a755d6740e78a2ba8586d22c71af160ecf913"}
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.373453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" event={"ID":"b88c3e5f-7390-477c-ae74-aced26a8ddf9","Type":"ContainerStarted","Data":"1b698169075bf038e5184d91d7401cd9a1728c0dfa40c4b12efb0fd20af6ad51"}
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.811107 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.967657 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") pod \"92637ea3-788c-438d-a664-c2b8d640f2d1\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") "
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.967773 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") pod \"92637ea3-788c-438d-a664-c2b8d640f2d1\" (UID: \"92637ea3-788c-438d-a664-c2b8d640f2d1\") "
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.967870 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92637ea3-788c-438d-a664-c2b8d640f2d1" (UID: "92637ea3-788c-438d-a664-c2b8d640f2d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.969513 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92637ea3-788c-438d-a664-c2b8d640f2d1-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:41 crc kubenswrapper[4808]: I0217 15:56:41.976094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92637ea3-788c-438d-a664-c2b8d640f2d1" (UID: "92637ea3-788c-438d-a664-c2b8d640f2d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.072371 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92637ea3-788c-438d-a664-c2b8d640f2d1-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.422804 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92637ea3-788c-438d-a664-c2b8d640f2d1","Type":"ContainerDied","Data":"8d3e6325dff416527f0b5f7a426deb2ee9273e60e45b536362885c914658d019"}
Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.422853 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d3e6325dff416527f0b5f7a426deb2ee9273e60e45b536362885c914658d019"
Feb 17 15:56:42 crc kubenswrapper[4808]: I0217 15:56:42.422854 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.465088 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z8tn8" event={"ID":"b88c3e5f-7390-477c-ae74-aced26a8ddf9","Type":"ContainerStarted","Data":"06a957f888bb8269e1dbf81b7c6449a7e858c2480beeda1758a5795ebe02bd2f"}
Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.468080 4808 generic.go:334] "Generic (PLEG): container finished" podID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerID="084dd9cf385adbcc2f2e5a2b91eb5e840e1a961c941e025bd32443d059e8b202" exitCode=0
Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.468138 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940","Type":"ContainerDied","Data":"084dd9cf385adbcc2f2e5a2b91eb5e840e1a961c941e025bd32443d059e8b202"}
Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.485275 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-z8tn8" podStartSLOduration=146.485233801 podStartE2EDuration="2m26.485233801s" podCreationTimestamp="2026-02-17 15:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:56:43.483777812 +0000 UTC m=+167.000136935" watchObservedRunningTime="2026-02-17 15:56:43.485233801 +0000 UTC m=+167.001592874"
Feb 17 15:56:43 crc kubenswrapper[4808]: I0217 15:56:43.812401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x2jlg"
Feb 17 15:56:47 crc kubenswrapper[4808]: I0217 15:56:47.919865 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wlj8d"
Feb 17 15:56:48 crc kubenswrapper[4808]: I0217 15:56:48.224363 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-hdg74"
Feb 17 15:56:48 crc kubenswrapper[4808]: I0217 15:56:48.229382 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-hdg74"
Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.837196 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.951470 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") pod \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") "
Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.951643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") pod \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\" (UID: \"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940\") "
Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.951778 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" (UID: "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.952254 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:50 crc kubenswrapper[4808]: I0217 15:56:50.966874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" (UID: "7c3eff00-0ae7-4c6a-ad5f-931c2cf09940"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.054084 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c3eff00-0ae7-4c6a-ad5f-931c2cf09940-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.406344 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"]
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.406599 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" containerID="cri-o://fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47" gracePeriod=30
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.424851 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"]
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.425094 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" containerID="cri-o://f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184" gracePeriod=30
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.591913 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.591980 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.592687 4808 generic.go:334] "Generic (PLEG): container finished" podID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerID="f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184" exitCode=0
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.592785 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerDied","Data":"f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184"}
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.598132 4808 generic.go:334] "Generic (PLEG): container finished" podID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerID="fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47" exitCode=0
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.598216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerDied","Data":"fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47"}
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.605345 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7c3eff00-0ae7-4c6a-ad5f-931c2cf09940","Type":"ContainerDied","Data":"641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685"}
Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.605392 4808 pod_container_deletor.go:80] "Container not found in pod's containers"
containerID="641c09b4d5872f3c3e8ee8e03d3848dfb882c5e36b3e9f317878d25816f52685" Feb 17 15:56:51 crc kubenswrapper[4808]: I0217 15:56:51.605465 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 17 15:56:56 crc kubenswrapper[4808]: I0217 15:56:56.650841 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 15:56:58 crc kubenswrapper[4808]: I0217 15:56:58.126473 4808 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-j6vm5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 17 15:56:58 crc kubenswrapper[4808]: I0217 15:56:58.127172 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 17 15:56:59 crc kubenswrapper[4808]: I0217 15:56:59.195536 4808 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cvqck container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 17 15:56:59 crc kubenswrapper[4808]: I0217 15:56:59.196192 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.14:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 17 15:57:01 crc kubenswrapper[4808]: E0217 15:57:01.715203 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 17 15:57:01 crc kubenswrapper[4808]: E0217 15:57:01.716294 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp46n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError
,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hn7fn_openshift-marketplace(a1db3ff7-c43f-412e-ab72-3d592b6352b0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:01 crc kubenswrapper[4808]: E0217 15:57:01.717645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hn7fn" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.215094 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hn7fn" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.278821 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.313856 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.314230 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314248 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.314259 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314267 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.314284 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314293 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314427 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" containerName="controller-manager" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314441 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3eff00-0ae7-4c6a-ad5f-931c2cf09940" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.314452 4808 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="92637ea3-788c-438d-a664-c2b8d640f2d1" containerName="pruner" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.315100 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.317227 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.360486 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.360685 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h922n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-22x8m_openshift-marketplace(543b2019-8399-411e-8e8b-45787b96873f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:03 crc kubenswrapper[4808]: E0217 15:57:03.361936 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-22x8m" podUID="543b2019-8399-411e-8e8b-45787b96873f" Feb 17 15:57:03 crc 
kubenswrapper[4808]: I0217 15:57:03.450019 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450116 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450289 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450357 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.450612 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") pod \"a7649915-6408-4c30-8faa-0fb3ea55007a\" (UID: \"a7649915-6408-4c30-8faa-0fb3ea55007a\") " Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.451397 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config" (OuterVolumeSpecName: "config") pod 
"a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.451468 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca" (OuterVolumeSpecName: "client-ca") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.451479 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452425 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452452 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452630 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452646 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.452678 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7649915-6408-4c30-8faa-0fb3ea55007a-config\") on node 
\"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.458772 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.461368 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf" (OuterVolumeSpecName: "kube-api-access-v8srf") pod "a7649915-6408-4c30-8faa-0fb3ea55007a" (UID: "a7649915-6408-4c30-8faa-0fb3ea55007a"). InnerVolumeSpecName "kube-api-access-v8srf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxsd5\" (UniqueName: 
\"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553892 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553918 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553964 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8srf\" (UniqueName: \"kubernetes.io/projected/a7649915-6408-4c30-8faa-0fb3ea55007a-kube-api-access-v8srf\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.553980 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7649915-6408-4c30-8faa-0fb3ea55007a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.555744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " 
pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.557729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.558193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.560426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.576199 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"controller-manager-5bfbf6ffb-5h8qn\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.673067 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.751285 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.753125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cvqck" event={"ID":"a7649915-6408-4c30-8faa-0fb3ea55007a","Type":"ContainerDied","Data":"82fbd205cacd70de3bd72105fabd5651b63f3ef10de2b4bbb91392f1254ffcb7"} Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.753270 4808 scope.go:117] "RemoveContainer" containerID="fb57ffbad5715668e0b26cf285ebec4d01aad8ac4a4db782b62b453c180c8e47" Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.797200 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:57:03 crc kubenswrapper[4808]: I0217 15:57:03.801854 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cvqck"] Feb 17 15:57:05 crc kubenswrapper[4808]: I0217 15:57:05.161237 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7649915-6408-4c30-8faa-0fb3ea55007a" path="/var/lib/kubelet/pods/a7649915-6408-4c30-8faa-0fb3ea55007a/volumes" Feb 17 15:57:05 crc kubenswrapper[4808]: I0217 15:57:05.476101 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.430306 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-22x8m" podUID="543b2019-8399-411e-8e8b-45787b96873f" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.517347 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.567881 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.572645 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfwdc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMo
unt:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-8jsrz_openshift-marketplace(e22d34a8-92f6-4a2a-a0f5-e063c25afac1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.572960 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.573297 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.573312 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.573451 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" containerName="route-controller-manager" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.573959 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.574006 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.575570 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.575789 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptbxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Star
tupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-cs597_openshift-marketplace(48efd125-e3aa-444d-91a3-fa915be48b46): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.577271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.580303 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.593940 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.594296 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2255r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qhtfr_openshift-marketplace(df27437e-6547-4705-bbe7-08a726639dbe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.595716 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" Feb 17 15:57:07 crc 
kubenswrapper[4808]: I0217 15:57:07.627153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627755 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.627785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") pod \"8227d3a9-60f5-4d19-b4d1-8a0143864837\" (UID: \"8227d3a9-60f5-4d19-b4d1-8a0143864837\") " Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.628412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca" (OuterVolumeSpecName: "client-ca") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.628987 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config" (OuterVolumeSpecName: "config") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.634897 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.639416 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t" (OuterVolumeSpecName: "kube-api-access-6nx4t") pod "8227d3a9-60f5-4d19-b4d1-8a0143864837" (UID: "8227d3a9-60f5-4d19-b4d1-8a0143864837"). InnerVolumeSpecName "kube-api-access-6nx4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729362 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729648 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729796 4808 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729848 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nx4t\" (UniqueName: \"kubernetes.io/projected/8227d3a9-60f5-4d19-b4d1-8a0143864837-kube-api-access-6nx4t\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729864 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8227d3a9-60f5-4d19-b4d1-8a0143864837-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.729874 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8227d3a9-60f5-4d19-b4d1-8a0143864837-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.781368 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" event={"ID":"8227d3a9-60f5-4d19-b4d1-8a0143864837","Type":"ContainerDied","Data":"87a30c2a90c4016dabeb2fd3e6331db8b801e3a30d3bec36b1482acb813df460"} Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.781875 4808 scope.go:117] "RemoveContainer" containerID="f98437fbbf139d63581f07e82442459bd2916424cb75fd60caf9d2b40747e184" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.782012 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.795911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerStarted","Data":"bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93"} Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.809318 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.830240 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerStarted","Data":"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c"} Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831162 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc69\" (UniqueName: 
\"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.831313 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.832988 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.833192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.838835 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc 
kubenswrapper[4808]: I0217 15:57:07.841516 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.842821 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.846732 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.847704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"route-controller-manager-5797f68d88-nqrfd\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.853586 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-j6vm5"] Feb 17 15:57:07 crc kubenswrapper[4808]: E0217 15:57:07.856483 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" Feb 
17 15:57:07 crc kubenswrapper[4808]: I0217 15:57:07.895539 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.121957 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:08 crc kubenswrapper[4808]: W0217 15:57:08.178364 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b4e80a_42fe_4a5f_99f3_e9967587b72a.slice/crio-d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe WatchSource:0}: Error finding container d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe: Status 404 returned error can't find the container with id d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.771864 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-spzc7" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.849565 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerStarted","Data":"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.849635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerStarted","Data":"3db618564a6ec77c73d367392142f19b47c4dabc393708105b57bd64c94ec953"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.849980 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.851700 4808 generic.go:334] "Generic (PLEG): container finished" podID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" exitCode=0 Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.851725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.854844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerStarted","Data":"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.854872 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerStarted","Data":"d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.855611 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.856644 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.857996 4808 generic.go:334] "Generic (PLEG): container finished" podID="57300b85-6c7e-49da-bb14-40055f48a85c" 
containerID="bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93" exitCode=0 Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.858054 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.863732 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.864338 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f04008a-114c-4f19-971a-34fa574846f5" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" exitCode=0 Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.864432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c"} Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.897726 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" podStartSLOduration=17.897703613 podStartE2EDuration="17.897703613s" podCreationTimestamp="2026-02-17 15:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:08.894904848 +0000 UTC m=+192.411263921" watchObservedRunningTime="2026-02-17 15:57:08.897703613 +0000 UTC m=+192.414062686" Feb 17 15:57:08 crc kubenswrapper[4808]: I0217 15:57:08.971838 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" podStartSLOduration=17.971810197 podStartE2EDuration="17.971810197s" podCreationTimestamp="2026-02-17 15:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:08.959463414 +0000 UTC m=+192.475822487" watchObservedRunningTime="2026-02-17 15:57:08.971810197 +0000 UTC m=+192.488169270" Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.154213 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8227d3a9-60f5-4d19-b4d1-8a0143864837" path="/var/lib/kubelet/pods/8227d3a9-60f5-4d19-b4d1-8a0143864837/volumes" Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.885524 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerStarted","Data":"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add"} Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.888448 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerStarted","Data":"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8"} Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.923233 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wsbjl" podStartSLOduration=3.579704015 podStartE2EDuration="35.92320537s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.026007109 +0000 UTC m=+160.542366182" lastFinishedPulling="2026-02-17 15:57:09.369508464 +0000 UTC m=+192.885867537" observedRunningTime="2026-02-17 15:57:09.917517276 +0000 UTC m=+193.433876349" watchObservedRunningTime="2026-02-17 15:57:09.92320537 +0000 UTC m=+193.439564443" 
Feb 17 15:57:09 crc kubenswrapper[4808]: I0217 15:57:09.946239 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ts9gs" podStartSLOduration=2.7205383899999998 podStartE2EDuration="33.946216012s" podCreationTimestamp="2026-02-17 15:56:36 +0000 UTC" firstStartedPulling="2026-02-17 15:56:38.170198756 +0000 UTC m=+161.686557829" lastFinishedPulling="2026-02-17 15:57:09.395876378 +0000 UTC m=+192.912235451" observedRunningTime="2026-02-17 15:57:09.939807719 +0000 UTC m=+193.456166792" watchObservedRunningTime="2026-02-17 15:57:09.946216012 +0000 UTC m=+193.462575085" Feb 17 15:57:10 crc kubenswrapper[4808]: I0217 15:57:10.896459 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerStarted","Data":"4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4"} Feb 17 15:57:10 crc kubenswrapper[4808]: I0217 15:57:10.922881 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6vvmq" podStartSLOduration=4.26199051 podStartE2EDuration="36.922851328s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.109512128 +0000 UTC m=+160.625871201" lastFinishedPulling="2026-02-17 15:57:09.770372946 +0000 UTC m=+193.286732019" observedRunningTime="2026-02-17 15:57:10.91593599 +0000 UTC m=+194.432295063" watchObservedRunningTime="2026-02-17 15:57:10.922851328 +0000 UTC m=+194.439210401" Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.386552 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"] Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.500167 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"] Feb 17 15:57:11 
crc kubenswrapper[4808]: I0217 15:57:11.901787 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager" containerID="cri-o://cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" gracePeriod=30 Feb 17 15:57:11 crc kubenswrapper[4808]: I0217 15:57:11.903357 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager" containerID="cri-o://7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" gracePeriod=30 Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.396642 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.466467 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495475 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495519 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495544 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495561 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495598 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") " Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495617 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") "
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495655 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") "
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495672 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") pod \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\" (UID: \"26b4e80a-42fe-4a5f-99f3-e9967587b72a\") "
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.495697 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") pod \"013c1a2d-19c5-47a3-ae05-f202eac66987\" (UID: \"013c1a2d-19c5-47a3-ae05-f202eac66987\") "
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497359 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497462 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config" (OuterVolumeSpecName: "config") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497480 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config" (OuterVolumeSpecName: "config") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.497710 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca" (OuterVolumeSpecName: "client-ca") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.498224 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca" (OuterVolumeSpecName: "client-ca") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.503802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69" (OuterVolumeSpecName: "kube-api-access-7lc69") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "kube-api-access-7lc69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.503939 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.504134 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5" (OuterVolumeSpecName: "kube-api-access-lxsd5") pod "013c1a2d-19c5-47a3-ae05-f202eac66987" (UID: "013c1a2d-19c5-47a3-ae05-f202eac66987"). InnerVolumeSpecName "kube-api-access-lxsd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.506659 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "26b4e80a-42fe-4a5f-99f3-e9967587b72a" (UID: "26b4e80a-42fe-4a5f-99f3-e9967587b72a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597099 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597147 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b4e80a-42fe-4a5f-99f3-e9967587b72a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597162 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-config\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597175 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-config\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597187 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/013c1a2d-19c5-47a3-ae05-f202eac66987-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597200 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/26b4e80a-42fe-4a5f-99f3-e9967587b72a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597213 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/013c1a2d-19c5-47a3-ae05-f202eac66987-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597226 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lc69\" (UniqueName: \"kubernetes.io/projected/26b4e80a-42fe-4a5f-99f3-e9967587b72a-kube-api-access-7lc69\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.597242 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxsd5\" (UniqueName: \"kubernetes.io/projected/013c1a2d-19c5-47a3-ae05-f202eac66987-kube-api-access-lxsd5\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799126 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"]
Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.799666 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799698 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager"
Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.799723 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799735 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.799993 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerName="controller-manager"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.800016 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerName="route-controller-manager"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.800793 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.807933 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"]
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.808915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.812470 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"]
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.815359 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"]
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913512 4808 generic.go:334] "Generic (PLEG): container finished" podID="013c1a2d-19c5-47a3-ae05-f202eac66987" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1" exitCode=0
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913675 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerDied","Data":"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"}
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913996 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn" event={"ID":"013c1a2d-19c5-47a3-ae05-f202eac66987","Type":"ContainerDied","Data":"3db618564a6ec77c73d367392142f19b47c4dabc393708105b57bd64c94ec953"}
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.914017 4808 scope.go:117] "RemoveContainer" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.913710 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918243 4808 generic.go:334] "Generic (PLEG): container finished" podID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c" exitCode=0
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918314 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918344 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerDied","Data":"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"}
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.918586 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd" event={"ID":"26b4e80a-42fe-4a5f-99f3-e9967587b72a","Type":"ContainerDied","Data":"d29a4ebc7c0c8249cf9b6afd154a8e3281a9da7692080a8b5c9a23df6d329cfe"}
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.943926 4808 scope.go:117] "RemoveContainer" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"
Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.948729 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1\": container with ID starting with 7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1 not found: ID does not exist" containerID="7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.948871 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1"} err="failed to get container status \"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1\": rpc error: code = NotFound desc = could not find container \"7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1\": container with ID starting with 7406868a2293d8950f1d4eab45dbd36bf1a8a3819755cbb814f90b1c5517b8b1 not found: ID does not exist"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.949021 4808 scope.go:117] "RemoveContainer" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.967235 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"]
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.972881 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bfbf6ffb-5h8qn"]
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.983076 4808 scope.go:117] "RemoveContainer" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"
Feb 17 15:57:12 crc kubenswrapper[4808]: E0217 15:57:12.985176 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c\": container with ID starting with cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c not found: ID does not exist" containerID="cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"
Feb 17 15:57:12 crc kubenswrapper[4808]: I0217 15:57:12.985322 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c"} err="failed to get container status \"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c\": rpc error: code = NotFound desc = could not find container \"cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c\": container with ID starting with cb87e90bf76e5d5089065094e76d13badc3d77135b619ab84f905d563062244c not found: ID does not exist"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.002120 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"]
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004301 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004420 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004431 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5797f68d88-nqrfd"]
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004460 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.004514 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011516 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011564 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.011617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112692 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112890 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.112972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.113035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.113061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115299 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.115534 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.116920 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.120214 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.122900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.132502 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"route-controller-manager-79d5bcd6bf-cd2bq\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") " pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.133519 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"controller-manager-58c84966cb-66dmv\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") " pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.140392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.153153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.153471 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013c1a2d-19c5-47a3-ae05-f202eac66987" path="/var/lib/kubelet/pods/013c1a2d-19c5-47a3-ae05-f202eac66987/volumes"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.154243 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b4e80a-42fe-4a5f-99f3-e9967587b72a" path="/var/lib/kubelet/pods/26b4e80a-42fe-4a5f-99f3-e9967587b72a/volumes"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.407817 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"]
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.464695 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"]
Feb 17 15:57:13 crc kubenswrapper[4808]: W0217 15:57:13.486847 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9455640b_d252_4198_b7df_a410bf7df2fe.slice/crio-327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39 WatchSource:0}: Error finding container 327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39: Status 404 returned error can't find the container with id 327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.928104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerStarted","Data":"2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450"}
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.928212 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerStarted","Data":"327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39"}
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.928472 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.930905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerStarted","Data":"04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95"}
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.930961 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerStarted","Data":"5a6cae267669bf9865700e7923e707ca2f9a9c9fd07c5ade06fb9066e508ae1a"}
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.931063 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.938371 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.947920 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" podStartSLOduration=2.947901166 podStartE2EDuration="2.947901166s" podCreationTimestamp="2026-02-17 15:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:13.94546286 +0000 UTC m=+197.461821933" watchObservedRunningTime="2026-02-17 15:57:13.947901166 +0000 UTC m=+197.464260239"
Feb 17 15:57:13 crc kubenswrapper[4808]: I0217 15:57:13.967357 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" podStartSLOduration=2.967332302 podStartE2EDuration="2.967332302s" podCreationTimestamp="2026-02-17 15:57:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:13.962199503 +0000 UTC m=+197.478558566" watchObservedRunningTime="2026-02-17 15:57:13.967332302 +0000 UTC m=+197.483691375"
Feb 17 15:57:14 crc kubenswrapper[4808]: I0217 15:57:14.154170 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.009086 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.010604 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.159480 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.185560 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.185651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.222939 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.820075 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.821114 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.825641 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.826091 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.829483 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.955278 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.955332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:15 crc kubenswrapper[4808]: I0217 15:57:15.994644 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wsbjl"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.005877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6vvmq"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.057243 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.057322 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.057930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.079814 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.204966 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.487219 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.920755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ts9gs"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.921181 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ts9gs"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.953751 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerStarted","Data":"8a67257d2f9fdfe95a5cbf4aabe44195eecc463b7d295e846399994ff28b484b"}
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.953813 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerStarted","Data":"56afd58e8a64a79de748ecc17d0404972690d47a6f6b7d4f90f438cdb2799a9f"}
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.965984 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ts9gs"
Feb 17 15:57:16 crc kubenswrapper[4808]: I0217 15:57:16.968681 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.968657348 podStartE2EDuration="1.968657348s" podCreationTimestamp="2026-02-17 15:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:16.968019301 +0000 UTC m=+200.484378384" watchObservedRunningTime="2026-02-17
15:57:16.968657348 +0000 UTC m=+200.485016421" Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.021789 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.961232 4808 generic.go:334] "Generic (PLEG): container finished" podID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerID="8a67257d2f9fdfe95a5cbf4aabe44195eecc463b7d295e846399994ff28b484b" exitCode=0 Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.961341 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerDied","Data":"8a67257d2f9fdfe95a5cbf4aabe44195eecc463b7d295e846399994ff28b484b"} Feb 17 15:57:17 crc kubenswrapper[4808]: I0217 15:57:17.971325 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerStarted","Data":"56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.037379 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"] Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.237072 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"] Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.237762 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wsbjl" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server" containerID="cri-o://0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" gracePeriod=2 Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.634239 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.801234 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") pod \"2f04008a-114c-4f19-971a-34fa574846f5\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.801318 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") pod \"2f04008a-114c-4f19-971a-34fa574846f5\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.801407 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") pod \"2f04008a-114c-4f19-971a-34fa574846f5\" (UID: \"2f04008a-114c-4f19-971a-34fa574846f5\") " Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.802108 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities" (OuterVolumeSpecName: "utilities") pod "2f04008a-114c-4f19-971a-34fa574846f5" (UID: "2f04008a-114c-4f19-971a-34fa574846f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.825511 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z" (OuterVolumeSpecName: "kube-api-access-z4v4z") pod "2f04008a-114c-4f19-971a-34fa574846f5" (UID: "2f04008a-114c-4f19-971a-34fa574846f5"). InnerVolumeSpecName "kube-api-access-z4v4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.903721 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.903762 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4v4z\" (UniqueName: \"kubernetes.io/projected/2f04008a-114c-4f19-971a-34fa574846f5-kube-api-access-z4v4z\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979700 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f04008a-114c-4f19-971a-34fa574846f5" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" exitCode=0 Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979775 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsbjl" event={"ID":"2f04008a-114c-4f19-971a-34fa574846f5","Type":"ContainerDied","Data":"735c6effafb73a77d28e55e021aec1242fb9a889fb9fde23203faa6b85d31dbc"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979848 4808 scope.go:117] "RemoveContainer" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.979988 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsbjl" Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.992978 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerID="56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3" exitCode=0 Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.993243 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3"} Feb 17 15:57:18 crc kubenswrapper[4808]: I0217 15:57:18.993892 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6vvmq" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server" containerID="cri-o://4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4" gracePeriod=2 Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.038042 4808 scope.go:117] "RemoveContainer" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.068498 4808 scope.go:117] "RemoveContainer" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.084063 4808 scope.go:117] "RemoveContainer" containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" Feb 17 15:57:19 crc kubenswrapper[4808]: E0217 15:57:19.084887 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add\": container with ID starting with 0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add not found: ID does not exist" 
containerID="0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.084931 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add"} err="failed to get container status \"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add\": rpc error: code = NotFound desc = could not find container \"0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add\": container with ID starting with 0e7ffda38dadb23c7fa43fc3d035ca26df0c3b1d59fe1979ae7c5702a3647add not found: ID does not exist" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.084961 4808 scope.go:117] "RemoveContainer" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" Feb 17 15:57:19 crc kubenswrapper[4808]: E0217 15:57:19.085310 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c\": container with ID starting with b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c not found: ID does not exist" containerID="b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.085446 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c"} err="failed to get container status \"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c\": rpc error: code = NotFound desc = could not find container \"b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c\": container with ID starting with b4900ba4eb2857f22d6e65bf801ac98b6168df05a60b82365a27f7fac0951d6c not found: ID does not exist" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.085468 4808 scope.go:117] 
"RemoveContainer" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" Feb 17 15:57:19 crc kubenswrapper[4808]: E0217 15:57:19.086041 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5\": container with ID starting with f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5 not found: ID does not exist" containerID="f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.086077 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5"} err="failed to get container status \"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5\": rpc error: code = NotFound desc = could not find container \"f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5\": container with ID starting with f9c248e0102ac7a597ac6e8de2b6e8d0d34fbaee650f849f4734c52dfbfaedd5 not found: ID does not exist" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.252297 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f04008a-114c-4f19-971a-34fa574846f5" (UID: "2f04008a-114c-4f19-971a-34fa574846f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.281786 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.317166 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"] Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.321194 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wsbjl"] Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.341486 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f04008a-114c-4f19-971a-34fa574846f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.442555 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") pod \"2a35eed2-a26d-4fc0-9daa-41e30256780e\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.442907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") pod \"2a35eed2-a26d-4fc0-9daa-41e30256780e\" (UID: \"2a35eed2-a26d-4fc0-9daa-41e30256780e\") " Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.443045 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a35eed2-a26d-4fc0-9daa-41e30256780e" (UID: "2a35eed2-a26d-4fc0-9daa-41e30256780e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.443197 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a35eed2-a26d-4fc0-9daa-41e30256780e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.451081 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a35eed2-a26d-4fc0-9daa-41e30256780e" (UID: "2a35eed2-a26d-4fc0-9daa-41e30256780e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:19 crc kubenswrapper[4808]: I0217 15:57:19.544645 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a35eed2-a26d-4fc0-9daa-41e30256780e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.008718 4808 generic.go:334] "Generic (PLEG): container finished" podID="57300b85-6c7e-49da-bb14-40055f48a85c" containerID="4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4" exitCode=0 Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.009733 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.018413 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerStarted","Data":"616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.022756 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2a35eed2-a26d-4fc0-9daa-41e30256780e","Type":"ContainerDied","Data":"56afd58e8a64a79de748ecc17d0404972690d47a6f6b7d4f90f438cdb2799a9f"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.022796 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56afd58e8a64a79de748ecc17d0404972690d47a6f6b7d4f90f438cdb2799a9f" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.022850 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.029317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerStarted","Data":"ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31"} Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.056122 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.095971 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hn7fn" podStartSLOduration=3.682103454 podStartE2EDuration="46.095939282s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.081235122 +0000 UTC m=+160.597594195" lastFinishedPulling="2026-02-17 15:57:19.49507094 +0000 UTC m=+203.011430023" observedRunningTime="2026-02-17 15:57:20.068384406 +0000 UTC m=+203.584743529" watchObservedRunningTime="2026-02-17 15:57:20.095939282 +0000 UTC m=+203.612298395" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.253492 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") pod \"57300b85-6c7e-49da-bb14-40055f48a85c\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.253653 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") pod \"57300b85-6c7e-49da-bb14-40055f48a85c\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.253718 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") pod \"57300b85-6c7e-49da-bb14-40055f48a85c\" (UID: \"57300b85-6c7e-49da-bb14-40055f48a85c\") " Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.254352 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities" 
(OuterVolumeSpecName: "utilities") pod "57300b85-6c7e-49da-bb14-40055f48a85c" (UID: "57300b85-6c7e-49da-bb14-40055f48a85c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.263218 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx" (OuterVolumeSpecName: "kube-api-access-pzvbx") pod "57300b85-6c7e-49da-bb14-40055f48a85c" (UID: "57300b85-6c7e-49da-bb14-40055f48a85c"). InnerVolumeSpecName "kube-api-access-pzvbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.312263 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57300b85-6c7e-49da-bb14-40055f48a85c" (UID: "57300b85-6c7e-49da-bb14-40055f48a85c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.354877 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.354927 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57300b85-6c7e-49da-bb14-40055f48a85c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.354940 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzvbx\" (UniqueName: \"kubernetes.io/projected/57300b85-6c7e-49da-bb14-40055f48a85c-kube-api-access-pzvbx\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.436557 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"] Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.436863 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ts9gs" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server" containerID="cri-o://79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" gracePeriod=2 Feb 17 15:57:20 crc kubenswrapper[4808]: I0217 15:57:20.885316 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.039846 4808 generic.go:334] "Generic (PLEG): container finished" podID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerID="616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.039864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045064 4808 generic.go:334] "Generic (PLEG): container finished" podID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" exitCode=0 Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045133 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045183 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts9gs" event={"ID":"92dfded8-f453-4bfc-809e-e7ed7e25de27","Type":"ContainerDied","Data":"f4563d14e850e83b34a7ac316296bd63282dec1b6828a89346f08302aa89387a"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045199 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts9gs" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.045212 4808 scope.go:117] "RemoveContainer" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.050808 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6vvmq" event={"ID":"57300b85-6c7e-49da-bb14-40055f48a85c","Type":"ContainerDied","Data":"978f619d6b3d5011491c32f00a6237544c3cbc039e50f7389d14d76374df3c9e"} Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.050932 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6vvmq" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.064858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") pod \"92dfded8-f453-4bfc-809e-e7ed7e25de27\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.064926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") pod \"92dfded8-f453-4bfc-809e-e7ed7e25de27\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.064958 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") pod \"92dfded8-f453-4bfc-809e-e7ed7e25de27\" (UID: \"92dfded8-f453-4bfc-809e-e7ed7e25de27\") " Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.065877 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities" (OuterVolumeSpecName: "utilities") pod "92dfded8-f453-4bfc-809e-e7ed7e25de27" (UID: "92dfded8-f453-4bfc-809e-e7ed7e25de27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.072218 4808 scope.go:117] "RemoveContainer" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.086445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv" (OuterVolumeSpecName: "kube-api-access-kbjtv") pod "92dfded8-f453-4bfc-809e-e7ed7e25de27" (UID: "92dfded8-f453-4bfc-809e-e7ed7e25de27"). InnerVolumeSpecName "kube-api-access-kbjtv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.099911 4808 scope.go:117] "RemoveContainer" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.110509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92dfded8-f453-4bfc-809e-e7ed7e25de27" (UID: "92dfded8-f453-4bfc-809e-e7ed7e25de27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.132496 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"] Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.135635 4808 scope.go:117] "RemoveContainer" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.135924 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6vvmq"] Feb 17 15:57:21 crc kubenswrapper[4808]: E0217 15:57:21.136224 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8\": container with ID starting with 79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8 not found: ID does not exist" containerID="79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.136703 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8"} err="failed to get container status \"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8\": rpc error: code = NotFound desc = could not find container \"79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8\": container with ID starting with 79c59f236601db2e02bc2df82891cddc398d12a9a7f46934d64515020f07caa8 not found: ID does not exist" Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.136759 4808 scope.go:117] "RemoveContainer" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190" Feb 17 15:57:21 crc kubenswrapper[4808]: E0217 15:57:21.137384 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190\": container with ID starting with 05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190 not found: ID does not exist" containerID="05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137438 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190"} err="failed to get container status \"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190\": rpc error: code = NotFound desc = could not find container \"05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190\": container with ID starting with 05108c0dc38f3bc05084f54e3c00bb8e1ea701f996797f792c1317ab21953190 not found: ID does not exist"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137476 4808 scope.go:117] "RemoveContainer" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2"
Feb 17 15:57:21 crc kubenswrapper[4808]: E0217 15:57:21.137830 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2\": container with ID starting with 9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2 not found: ID does not exist" containerID="9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137868 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2"} err="failed to get container status \"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2\": rpc error: code = NotFound desc = could not find container \"9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2\": container with ID starting with 9354679fc175439a552de7724a5e6bda5b9e9fec4478f89999a50a2ea884f0d2 not found: ID does not exist"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.137893 4808 scope.go:117] "RemoveContainer" containerID="4af04fd40045e9e7dfaadf911b9f31ed6ee225c9d6497d579fe01321855f1de4"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.150329 4808 scope.go:117] "RemoveContainer" containerID="bbcda24c56c4da1bf611a909ec28352a94064de773428161e7634b8284dbcb93"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.152869 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f04008a-114c-4f19-971a-34fa574846f5" path="/var/lib/kubelet/pods/2f04008a-114c-4f19-971a-34fa574846f5/volumes"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.153490 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" path="/var/lib/kubelet/pods/57300b85-6c7e-49da-bb14-40055f48a85c/volumes"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.166158 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.166188 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjtv\" (UniqueName: \"kubernetes.io/projected/92dfded8-f453-4bfc-809e-e7ed7e25de27-kube-api-access-kbjtv\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.166198 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92dfded8-f453-4bfc-809e-e7ed7e25de27-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.173967 4808 scope.go:117] "RemoveContainer" containerID="a0e2eeefc3bf87bde55affaedf8d295a474fecb9dcf906520b5bc6b26957f78c"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.393210 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"]
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.399935 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts9gs"]
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.592558 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.592827 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.592900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.593858 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 15:57:21 crc kubenswrapper[4808]: I0217 15:57:21.593935 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9" gracePeriod=600
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.062598 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9" exitCode=0
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.063071 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9"}
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.063105 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"}
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.066733 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerStarted","Data":"aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852"}
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.104206 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8jsrz" podStartSLOduration=2.8835563669999997 podStartE2EDuration="45.104177748s" podCreationTimestamp="2026-02-17 15:56:37 +0000 UTC" firstStartedPulling="2026-02-17 15:56:39.23291018 +0000 UTC m=+162.749269253" lastFinishedPulling="2026-02-17 15:57:21.453531551 +0000 UTC m=+204.969890634" observedRunningTime="2026-02-17 15:57:22.097912049 +0000 UTC m=+205.614271162" watchObservedRunningTime="2026-02-17 15:57:22.104177748 +0000 UTC m=+205.620536831"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818044 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818789 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818801 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerName="pruner"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818807 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerName="pruner"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818815 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818824 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818831 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818837 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818851 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818857 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818869 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818884 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818902 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818909 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-content"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818916 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818923 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: E0217 15:57:22.818932 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.818938 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="extract-utilities"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819048 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819059 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a35eed2-a26d-4fc0-9daa-41e30256780e" containerName="pruner"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819068 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f04008a-114c-4f19-971a-34fa574846f5" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819081 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="57300b85-6c7e-49da-bb14-40055f48a85c" containerName="registry-server"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.819529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.822817 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.825682 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.831750 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.900356 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.900421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:22 crc kubenswrapper[4808]: I0217 15:57:22.900463 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002365 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.002590 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.042109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"installer-9-crc\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.139971 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.152134 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfded8-f453-4bfc-809e-e7ed7e25de27" path="/var/lib/kubelet/pods/92dfded8-f453-4bfc-809e-e7ed7e25de27/volumes"
Feb 17 15:57:23 crc kubenswrapper[4808]: I0217 15:57:23.656977 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 17 15:57:23 crc kubenswrapper[4808]: W0217 15:57:23.687559 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3e6a81ca_0d6e_48d2_a0a2_ada5fcb8b25e.slice/crio-c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416 WatchSource:0}: Error finding container c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416: Status 404 returned error can't find the container with id c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.080188 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerStarted","Data":"e259bf574b3e5b34a738dc5aa049367d026f2cbb8c3d1e0e5771dc0d329364c7"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.080694 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerStarted","Data":"c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.083734 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerStarted","Data":"1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.086834 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerStarted","Data":"2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e"}
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.105000 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.104782529 podStartE2EDuration="2.104782529s" podCreationTimestamp="2026-02-17 15:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:24.095045046 +0000 UTC m=+207.611404139" watchObservedRunningTime="2026-02-17 15:57:24.104782529 +0000 UTC m=+207.621141602"
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.787807 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.792808 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:24 crc kubenswrapper[4808]: I0217 15:57:24.845358 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.095871 4808 generic.go:334] "Generic (PLEG): container finished" podID="543b2019-8399-411e-8e8b-45787b96873f" containerID="335aab9c25e746284f138cf133ee4f794236186f62c6450d29a99ecbca2622cc" exitCode=0
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.095953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"335aab9c25e746284f138cf133ee4f794236186f62c6450d29a99ecbca2622cc"}
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.099317 4808 generic.go:334] "Generic (PLEG): container finished" podID="48efd125-e3aa-444d-91a3-fa915be48b46" containerID="2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e" exitCode=0
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.099369 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e"}
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.102591 4808 generic.go:334] "Generic (PLEG): container finished" podID="df27437e-6547-4705-bbe7-08a726639dbe" containerID="1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e" exitCode=0
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.102946 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e"}
Feb 17 15:57:25 crc kubenswrapper[4808]: I0217 15:57:25.164015 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hn7fn"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.136217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerStarted","Data":"1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0"}
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.138227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerStarted","Data":"ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5"}
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.141045 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerStarted","Data":"5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c"}
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.160940 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cs597" podStartSLOduration=2.777519122 podStartE2EDuration="50.160918311s" podCreationTimestamp="2026-02-17 15:56:36 +0000 UTC" firstStartedPulling="2026-02-17 15:56:38.158954072 +0000 UTC m=+161.675313145" lastFinishedPulling="2026-02-17 15:57:25.542353261 +0000 UTC m=+209.058712334" observedRunningTime="2026-02-17 15:57:26.157338654 +0000 UTC m=+209.673697727" watchObservedRunningTime="2026-02-17 15:57:26.160918311 +0000 UTC m=+209.677277384"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.178747 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22x8m" podStartSLOduration=3.574077323 podStartE2EDuration="52.178725123s" podCreationTimestamp="2026-02-17 15:56:34 +0000 UTC" firstStartedPulling="2026-02-17 15:56:37.074696876 +0000 UTC m=+160.591055949" lastFinishedPulling="2026-02-17 15:57:25.679344676 +0000 UTC m=+209.195703749" observedRunningTime="2026-02-17 15:57:26.174763385 +0000 UTC m=+209.691122458" watchObservedRunningTime="2026-02-17 15:57:26.178725123 +0000 UTC m=+209.695084196"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.216519 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhtfr" podStartSLOduration=4.024033142 podStartE2EDuration="49.216495584s" podCreationTimestamp="2026-02-17 15:56:37 +0000 UTC" firstStartedPulling="2026-02-17 15:56:40.358900003 +0000 UTC m=+163.875259076" lastFinishedPulling="2026-02-17 15:57:25.551362445 +0000 UTC m=+209.067721518" observedRunningTime="2026-02-17 15:57:26.213633667 +0000 UTC m=+209.729992760" watchObservedRunningTime="2026-02-17 15:57:26.216495584 +0000 UTC m=+209.732854657"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.567190 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:57:26 crc kubenswrapper[4808]: I0217 15:57:26.567278 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cs597"
Feb 17 15:57:27 crc kubenswrapper[4808]: I0217 15:57:27.611370 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" probeResult="failure" output=<
Feb 17 15:57:27 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 15:57:27 crc kubenswrapper[4808]: >
Feb 17 15:57:27 crc kubenswrapper[4808]: I0217 15:57:27.935222 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8jsrz"
Feb 17 15:57:27 crc kubenswrapper[4808]: I0217 15:57:27.935771 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8jsrz"
Feb 17 15:57:28 crc kubenswrapper[4808]: I0217 15:57:28.398703 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:57:28 crc kubenswrapper[4808]: I0217 15:57:28.399124 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:57:28 crc kubenswrapper[4808]: I0217 15:57:28.973943 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8jsrz" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" probeResult="failure" output=<
Feb 17 15:57:28 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 15:57:28 crc kubenswrapper[4808]: >
Feb 17 15:57:29 crc kubenswrapper[4808]: I0217 15:57:29.466702 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server" probeResult="failure" output=<
Feb 17 15:57:29 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 15:57:29 crc kubenswrapper[4808]: >
Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.370388 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"]
Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.370701 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager" containerID="cri-o://04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95" gracePeriod=30
Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.386757 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"]
Feb 17 15:57:31 crc kubenswrapper[4808]: I0217 15:57:31.388085 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager" containerID="cri-o://2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450" gracePeriod=30
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.210806 4808 generic.go:334] "Generic (PLEG): container finished" podID="9455640b-d252-4198-b7df-a410bf7df2fe" containerID="2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450" exitCode=0
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.210942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerDied","Data":"2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450"}
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.215306 4808 generic.go:334] "Generic (PLEG): container finished" podID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerID="04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95" exitCode=0
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.215355 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerDied","Data":"04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95"}
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.523552 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.564445 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"]
Feb 17 15:57:32 crc kubenswrapper[4808]: E0217 15:57:32.564955 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.565027 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.565196 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" containerName="route-controller-manager"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.565859 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.588313 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"]
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.613035 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648243 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648316 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.648441 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") pod \"9455640b-d252-4198-b7df-a410bf7df2fe\" (UID: \"9455640b-d252-4198-b7df-a410bf7df2fe\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.649948 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.650068 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config" (OuterVolumeSpecName: "config") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.656770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.657163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn" (OuterVolumeSpecName: "kube-api-access-mdvvn") pod "9455640b-d252-4198-b7df-a410bf7df2fe" (UID: "9455640b-d252-4198-b7df-a410bf7df2fe"). InnerVolumeSpecName "kube-api-access-mdvvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750269 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750439 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750527 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.750688 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") pod \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\" (UID: \"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb\") "
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751078 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751136 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751178 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751286 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751397 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdvvn\" (UniqueName: \"kubernetes.io/projected/9455640b-d252-4198-b7df-a410bf7df2fe-kube-api-access-mdvvn\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751424 4808 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751444 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9455640b-d252-4198-b7df-a410bf7df2fe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751468 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455640b-d252-4198-b7df-a410bf7df2fe-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.751784 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.752409 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.753034 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config" (OuterVolumeSpecName: "config") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.753751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c" (OuterVolumeSpecName: "kube-api-access-9b42c") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "kube-api-access-9b42c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.754815 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" (UID: "a0b9abce-8b6f-4346-b18c-2bfb7e5982eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.852773 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.853345 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.853612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvm6t\" (UniqueName: 
\"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.853891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854174 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854330 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-config\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854471 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854624 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.854762 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b42c\" (UniqueName: \"kubernetes.io/projected/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb-kube-api-access-9b42c\") on node 
\"crc\" DevicePath \"\"" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.855121 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.855562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.858775 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.883087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"route-controller-manager-567cdd88c5-bmx27\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") " pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:32 crc kubenswrapper[4808]: I0217 15:57:32.914858 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.224834 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" event={"ID":"a0b9abce-8b6f-4346-b18c-2bfb7e5982eb","Type":"ContainerDied","Data":"5a6cae267669bf9865700e7923e707ca2f9a9c9fd07c5ade06fb9066e508ae1a"} Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.226980 4808 scope.go:117] "RemoveContainer" containerID="04835832bfc8343ab9fa813877ab509d95417e7a4406a2dd5c0ba0c9d44fac95" Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.225060 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c84966cb-66dmv" Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.227968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" event={"ID":"9455640b-d252-4198-b7df-a410bf7df2fe","Type":"ContainerDied","Data":"327f5a42044ba8a23bba834cc735ee73f16c693a4050fd5db7f91b4968d83e39"} Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.228094 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq" Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.265112 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"] Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.265928 4808 scope.go:117] "RemoveContainer" containerID="2c9dbd682946c3e5c2cfca8b85377da096ea534bb79d801e3a40476342b68450" Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.273356 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-58c84966cb-66dmv"] Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.278273 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"] Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.282121 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79d5bcd6bf-cd2bq"] Feb 17 15:57:33 crc kubenswrapper[4808]: I0217 15:57:33.406564 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"] Feb 17 15:57:33 crc kubenswrapper[4808]: W0217 15:57:33.417715 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b9b1d0_5520_48a9_b0b7_2c524d8ba56d.slice/crio-70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f WatchSource:0}: Error finding container 70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f: Status 404 returned error can't find the container with id 70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.243542 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerStarted","Data":"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"} Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.244129 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerStarted","Data":"70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f"} Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.244733 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.253691 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.270703 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" podStartSLOduration=3.270675056 podStartE2EDuration="3.270675056s" podCreationTimestamp="2026-02-17 15:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:34.268508747 +0000 UTC m=+217.784867910" watchObservedRunningTime="2026-02-17 15:57:34.270675056 +0000 UTC m=+217.787034159" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.596226 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.596304 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.662434 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.815964 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"] Feb 17 15:57:34 crc kubenswrapper[4808]: E0217 15:57:34.816457 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.816492 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.816779 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" containerName="controller-manager" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.817655 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.821265 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.821692 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.821910 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.822132 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.822503 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.822735 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.834747 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"] Feb 17 15:57:34 crc kubenswrapper[4808]: I0217 15:57:34.838069 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.006614 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " 
pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.006766 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.006863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.007094 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.007264 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.116821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117069 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.117378 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.119248 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.119635 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.120297 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.128790 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.148663 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"controller-manager-8594bddbbb-l7kxx\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") " pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 
15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.159139 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9455640b-d252-4198-b7df-a410bf7df2fe" path="/var/lib/kubelet/pods/9455640b-d252-4198-b7df-a410bf7df2fe/volumes" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.160466 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b9abce-8b6f-4346-b18c-2bfb7e5982eb" path="/var/lib/kubelet/pods/a0b9abce-8b6f-4346-b18c-2bfb7e5982eb/volumes" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.169786 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.328599 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22x8m" Feb 17 15:57:35 crc kubenswrapper[4808]: I0217 15:57:35.482568 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"] Feb 17 15:57:35 crc kubenswrapper[4808]: W0217 15:57:35.527341 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd49796a9_6bb5_4e1a_a203_95feb121a71b.slice/crio-cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b WatchSource:0}: Error finding container cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b: Status 404 returned error can't find the container with id cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.265553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerStarted","Data":"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"} Feb 17 15:57:36 crc 
kubenswrapper[4808]: I0217 15:57:36.266179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerStarted","Data":"cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b"} Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.613615 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:57:36 crc kubenswrapper[4808]: I0217 15:57:36.660420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 15:57:37 crc kubenswrapper[4808]: I0217 15:57:37.306337 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" podStartSLOduration=6.306318561 podStartE2EDuration="6.306318561s" podCreationTimestamp="2026-02-17 15:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:37.302822867 +0000 UTC m=+220.819181980" watchObservedRunningTime="2026-02-17 15:57:37.306318561 +0000 UTC m=+220.822677654" Feb 17 15:57:37 crc kubenswrapper[4808]: I0217 15:57:37.991788 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:57:38 crc kubenswrapper[4808]: I0217 15:57:38.054056 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 15:57:38 crc kubenswrapper[4808]: I0217 15:57:38.457755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhtfr" Feb 17 15:57:38 crc kubenswrapper[4808]: I0217 15:57:38.516656 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:57:39 crc kubenswrapper[4808]: I0217 15:57:39.843636 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"]
Feb 17 15:57:40 crc kubenswrapper[4808]: I0217 15:57:40.301400 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qhtfr" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server" containerID="cri-o://ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5" gracePeriod=2
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.312164 4808 generic.go:334] "Generic (PLEG): container finished" podID="df27437e-6547-4705-bbe7-08a726639dbe" containerID="ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5" exitCode=0
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.312224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5"}
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.801276 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.953014 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") pod \"df27437e-6547-4705-bbe7-08a726639dbe\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") "
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.953102 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") pod \"df27437e-6547-4705-bbe7-08a726639dbe\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") "
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.953197 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") pod \"df27437e-6547-4705-bbe7-08a726639dbe\" (UID: \"df27437e-6547-4705-bbe7-08a726639dbe\") "
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.955518 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities" (OuterVolumeSpecName: "utilities") pod "df27437e-6547-4705-bbe7-08a726639dbe" (UID: "df27437e-6547-4705-bbe7-08a726639dbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:57:41 crc kubenswrapper[4808]: I0217 15:57:41.966841 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r" (OuterVolumeSpecName: "kube-api-access-2255r") pod "df27437e-6547-4705-bbe7-08a726639dbe" (UID: "df27437e-6547-4705-bbe7-08a726639dbe"). InnerVolumeSpecName "kube-api-access-2255r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.055282 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2255r\" (UniqueName: \"kubernetes.io/projected/df27437e-6547-4705-bbe7-08a726639dbe-kube-api-access-2255r\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.055353 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.109152 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df27437e-6547-4705-bbe7-08a726639dbe" (UID: "df27437e-6547-4705-bbe7-08a726639dbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.156417 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df27437e-6547-4705-bbe7-08a726639dbe-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.323860 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhtfr" event={"ID":"df27437e-6547-4705-bbe7-08a726639dbe","Type":"ContainerDied","Data":"1e19955de905028b28d439d0244d4c394edca2e38947d73637092653f1783480"}
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.323938 4808 scope.go:117] "RemoveContainer" containerID="ab5bf34de9e08f53fdffa63c8df6a1c54b35f7cc20e2c243fa6aac5b8aadc2b5"
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.324339 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhtfr"
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.353888 4808 scope.go:117] "RemoveContainer" containerID="1704dbc2b68e2b10e28ffd609ebd58eead43e61a6bd1ead6a6230baca3c1409e"
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.384472 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"]
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.386875 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qhtfr"]
Feb 17 15:57:42 crc kubenswrapper[4808]: I0217 15:57:42.400287 4808 scope.go:117] "RemoveContainer" containerID="7be6898f1f88ea761e64c2d8022df14c7db8627e97d2f080f379df7514b92a85"
Feb 17 15:57:43 crc kubenswrapper[4808]: I0217 15:57:43.155969 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df27437e-6547-4705-bbe7-08a726639dbe" path="/var/lib/kubelet/pods/df27437e-6547-4705-bbe7-08a726639dbe/volumes"
Feb 17 15:57:45 crc kubenswrapper[4808]: I0217 15:57:45.170837 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:45 crc kubenswrapper[4808]: I0217 15:57:45.180453 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:47 crc kubenswrapper[4808]: I0217 15:57:47.440103 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq"]
Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.395915 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"]
Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.397096 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager" containerID="cri-o://a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" gracePeriod=30
Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.487823 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"]
Feb 17 15:57:51 crc kubenswrapper[4808]: I0217 15:57:51.488352 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager" containerID="cri-o://29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" gracePeriod=30
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.037284 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045134 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045235 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045285 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.045361 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") pod \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\" (UID: \"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.046348 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca" (OuterVolumeSpecName: "client-ca") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.046403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config" (OuterVolumeSpecName: "config") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.052717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.057834 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t" (OuterVolumeSpecName: "kube-api-access-fvm6t") pod "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" (UID: "80b9b1d0-5520-48a9-b0b7-2c524d8ba56d"). InnerVolumeSpecName "kube-api-access-fvm6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.078237 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.146819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147225 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147337 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147433 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") pod \"d49796a9-6bb5-4e1a-a203-95feb121a71b\" (UID: \"d49796a9-6bb5-4e1a-a203-95feb121a71b\") "
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.147874 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvm6t\" (UniqueName: \"kubernetes.io/projected/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-kube-api-access-fvm6t\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150273 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150372 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-config\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150456 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.149712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config" (OuterVolumeSpecName: "config") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.149735 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca" (OuterVolumeSpecName: "client-ca") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.150385 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.153869 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94" (OuterVolumeSpecName: "kube-api-access-bwn94") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "kube-api-access-bwn94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.154005 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d49796a9-6bb5-4e1a-a203-95feb121a71b" (UID: "d49796a9-6bb5-4e1a-a203-95feb121a71b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251500 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwn94\" (UniqueName: \"kubernetes.io/projected/d49796a9-6bb5-4e1a-a203-95feb121a71b-kube-api-access-bwn94\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251546 4808 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251556 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-config\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251567 4808 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d49796a9-6bb5-4e1a-a203-95feb121a71b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.251596 4808 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d49796a9-6bb5-4e1a-a203-95feb121a71b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396687 4808 generic.go:334] "Generic (PLEG): container finished" podID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a" exitCode=0
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396771 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerDied","Data":"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"}
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx" event={"ID":"d49796a9-6bb5-4e1a-a203-95feb121a71b","Type":"ContainerDied","Data":"cffb29ff0e4b3be981d1a59a5ed6094fc613f38be25d2865e1dc1af0b4d0785b"}
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.396833 4808 scope.go:117] "RemoveContainer" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.397009 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400083 4808 generic.go:334] "Generic (PLEG): container finished" podID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028" exitCode=0
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerDied","Data":"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"}
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400168 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27" event={"ID":"80b9b1d0-5520-48a9-b0b7-2c524d8ba56d","Type":"ContainerDied","Data":"70cc03d0f4a16d01a2409452eb79747f47e3f9835f1dc0806f2b12e87251321f"}
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.400179 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.417278 4808 scope.go:117] "RemoveContainer" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.418745 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a\": container with ID starting with a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a not found: ID does not exist" containerID="a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.418803 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a"} err="failed to get container status \"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a\": rpc error: code = NotFound desc = could not find container \"a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a\": container with ID starting with a5babccb833f23718d9dc43aa54f545e2591a0f290ded633c32f90221497b15a not found: ID does not exist"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.418862 4808 scope.go:117] "RemoveContainer" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.432761 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.436882 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567cdd88c5-bmx27"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.436949 4808 scope.go:117] "RemoveContainer" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.437884 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028\": container with ID starting with 29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028 not found: ID does not exist" containerID="29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.437934 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028"} err="failed to get container status \"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028\": rpc error: code = NotFound desc = could not find container \"29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028\": container with ID starting with 29b23adb7be4da7acebb0cc4e436ec05ecde2ceb12abd3e5503fc67622002028 not found: ID does not exist"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.451065 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.453290 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8594bddbbb-l7kxx"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826598 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"]
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.826915 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826955 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.826974 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-utilities"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826981 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-utilities"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.826992 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-content"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.826999 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="extract-content"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.827011 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827017 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server"
Feb 17 15:57:52 crc kubenswrapper[4808]: E0217 15:57:52.827027 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827034 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827148 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" containerName="route-controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827170 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" containerName="controller-manager"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827178 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="df27437e-6547-4705-bbe7-08a726639dbe" containerName="registry-server"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.827681 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.829639 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.830509 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.834829 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.835090 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.835101 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.835295 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.840801 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.840824 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.840879 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841767 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841787 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841770 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.841918 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.842041 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.846427 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.850956 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.853728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"]
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.860565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfjng\" (UniqueName: \"kubernetes.io/projected/8bcf84d4-b914-475a-be97-ecf8b121caf2-kube-api-access-cfjng\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.860954 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bcf84d4-b914-475a-be97-ecf8b121caf2-serving-cert\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861168 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-config\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-proxy-ca-bundles\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-client-ca\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861626 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4k92\" (UniqueName: \"kubernetes.io/projected/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-kube-api-access-v4k92\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.861758 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-client-ca\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.862061 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-config\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.862171 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-serving-cert\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4k92\" (UniqueName: \"kubernetes.io/projected/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-kube-api-access-v4k92\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963749 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-client-ca\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-client-ca\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-config\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963838 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-serving-cert\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfjng\" (UniqueName: \"kubernetes.io/projected/8bcf84d4-b914-475a-be97-ecf8b121caf2-kube-api-access-cfjng\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963914 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bcf84d4-b914-475a-be97-ecf8b121caf2-serving-cert\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.963953 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-config\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.964003 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-proxy-ca-bundles\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.965743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-client-ca\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.965743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-client-ca\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.965905 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bcf84d4-b914-475a-be97-ecf8b121caf2-config\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.966024 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-config\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.967294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-proxy-ca-bundles\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"
Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.970309 4808 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bcf84d4-b914-475a-be97-ecf8b121caf2-serving-cert\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.970330 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-serving-cert\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.989547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4k92\" (UniqueName: \"kubernetes.io/projected/5c8fa5f2-07b4-4b99-8448-3177c1e7d736-kube-api-access-v4k92\") pod \"controller-manager-6df9b784b8-zmkjg\" (UID: \"5c8fa5f2-07b4-4b99-8448-3177c1e7d736\") " pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:52 crc kubenswrapper[4808]: I0217 15:57:52.996195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfjng\" (UniqueName: \"kubernetes.io/projected/8bcf84d4-b914-475a-be97-ecf8b121caf2-kube-api-access-cfjng\") pod \"route-controller-manager-56bc8c57dd-2hsb9\" (UID: \"8bcf84d4-b914-475a-be97-ecf8b121caf2\") " pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.143130 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.149877 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.152399 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80b9b1d0-5520-48a9-b0b7-2c524d8ba56d" path="/var/lib/kubelet/pods/80b9b1d0-5520-48a9-b0b7-2c524d8ba56d/volumes" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.152983 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49796a9-6bb5-4e1a-a203-95feb121a71b" path="/var/lib/kubelet/pods/d49796a9-6bb5-4e1a-a203-95feb121a71b/volumes" Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.373400 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9"] Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.428560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" event={"ID":"8bcf84d4-b914-475a-be97-ecf8b121caf2","Type":"ContainerStarted","Data":"b9fa5710c504d58a802b1d136f2dbe3019c05f292e7764254fa2863aa9a29b94"} Feb 17 15:57:53 crc kubenswrapper[4808]: I0217 15:57:53.439067 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6df9b784b8-zmkjg"] Feb 17 15:57:53 crc kubenswrapper[4808]: W0217 15:57:53.454265 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c8fa5f2_07b4_4b99_8448_3177c1e7d736.slice/crio-d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37 WatchSource:0}: Error finding container d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37: Status 404 returned error can't find the container with id d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37 Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.437521 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" event={"ID":"5c8fa5f2-07b4-4b99-8448-3177c1e7d736","Type":"ContainerStarted","Data":"2c188f25669dfc99284f724066294f22d46530bf5b5489d5f81017c230cd64a5"} Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.437964 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" event={"ID":"5c8fa5f2-07b4-4b99-8448-3177c1e7d736","Type":"ContainerStarted","Data":"d8092b7f0c9ca6d3d2bf3ee03aae06ec027a0e90e79137fa7d5589766588eb37"} Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.437986 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.442742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" event={"ID":"8bcf84d4-b914-475a-be97-ecf8b121caf2","Type":"ContainerStarted","Data":"40e9597401850875091ed883ff41d7cb3516ede401e5423d5484ae46fc9a9ae8"} Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.443247 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.446487 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.463862 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" Feb 17 15:57:54 crc kubenswrapper[4808]: I0217 15:57:54.493048 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6df9b784b8-zmkjg" 
podStartSLOduration=3.4930241730000002 podStartE2EDuration="3.493024173s" podCreationTimestamp="2026-02-17 15:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:54.468548981 +0000 UTC m=+237.984908064" watchObservedRunningTime="2026-02-17 15:57:54.493024173 +0000 UTC m=+238.009383256" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.808722 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.810592 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.854886 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56bc8c57dd-2hsb9" podStartSLOduration=10.854858472 podStartE2EDuration="10.854858472s" podCreationTimestamp="2026-02-17 15:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:57:54.520527516 +0000 UTC m=+238.036886589" watchObservedRunningTime="2026-02-17 15:58:01.854858472 +0000 UTC m=+245.371217555" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.856918 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.875674 4808 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876051 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" containerID="cri-o://5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876100 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876223 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876169 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.876249 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6" gracePeriod=15 Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878133 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878483 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878513 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878526 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878533 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878541 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878548 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878565 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878601 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878610 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878617 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878631 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878637 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:01 crc kubenswrapper[4808]: E0217 15:58:01.878646 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878652 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878858 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878874 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878881 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878892 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878900 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.878907 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 17 
15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911198 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911330 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911462 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911658 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.911745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:01 crc kubenswrapper[4808]: I0217 15:58:01.913108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015224 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016091 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016120 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016146 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016170 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016208 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016330 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015982 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.015845 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016394 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.016458 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.145987 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:02 crc kubenswrapper[4808]: W0217 15:58:02.166275 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6 WatchSource:0}: Error finding container 9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6: Status 404 returned error can't find the container with id 9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6 Feb 17 15:58:02 crc kubenswrapper[4808]: E0217 15:58:02.171308 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513e037c419d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,LastTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.504046 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.507276 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508421 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508460 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508473 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508487 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6" exitCode=2 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.508606 4808 scope.go:117] "RemoveContainer" containerID="68d1439ead0f87e8cde6925c6db2cfde8a7fe89c6e5afaf719868740138742df" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.511260 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026"} Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.511299 4808 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"9fa49b6f4e1e787f24ce9611632df8fda558e131cc56432bdbbe7931a33284c6"} Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.512724 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513040 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513276 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerID="e259bf574b3e5b34a738dc5aa049367d026f2cbb8c3d1e0e5771dc0d329364c7" exitCode=0 Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513314 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerDied","Data":"e259bf574b3e5b34a738dc5aa049367d026f2cbb8c3d1e0e5771dc0d329364c7"} Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.513967 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 
crc kubenswrapper[4808]: I0217 15:58:02.514500 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:02 crc kubenswrapper[4808]: I0217 15:58:02.515339 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:03 crc kubenswrapper[4808]: I0217 15:58:03.532435 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.037185 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.038204 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.038434 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.151624 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") pod \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152014 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") pod \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.151911 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock" (OuterVolumeSpecName: "var-lock") pod "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" (UID: "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152108 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") pod \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\" (UID: \"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152225 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" (UID: "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152625 4808 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.152649 4808 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.159605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" (UID: "3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.254082 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.544506 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e","Type":"ContainerDied","Data":"c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416"} Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.544603 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.544617 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a19d1c77507692cfde7142aa7d8a5076017b742b37e3a0c970625447aea416" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.550307 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.552987 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b" exitCode=0 Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.570347 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.571299 4808 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.791912 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.793878 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.794983 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.795961 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.796617 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:04 crc 
kubenswrapper[4808]: I0217 15:58:04.864116 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864265 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864282 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864430 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.864994 4808 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.865073 4808 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:04 crc kubenswrapper[4808]: I0217 15:58:04.865099 4808 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.154290 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.578168 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.580815 4808 scope.go:117] "RemoveContainer" containerID="77d0e25e29d8f9c5146809e50f50a20c537f5ddecea1b902928a94870b5d44ef" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.581193 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.582072 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.582504 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.583182 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.587259 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.587877 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.589749 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.601851 4808 scope.go:117] "RemoveContainer" containerID="715d799f5e1732f88175b90bad28450b9c5148e89bf47ac3e47f9585acf3b392" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.622159 4808 scope.go:117] "RemoveContainer" containerID="695c70a36ec8a626d22b6dc04fdaad77e3e1f27a035ce6f62b96afe1f2c29361" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.647697 4808 scope.go:117] "RemoveContainer" containerID="e2611c9a878eac336beeea637370ce7fe47a5a80a6f29002cb2fb79d4637a1c6" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.671211 4808 scope.go:117] "RemoveContainer" containerID="5fa3ef5d82c776e482d3da2d223d74423393c75b813707483fadca8cfbb5ed3b" Feb 17 15:58:05 crc kubenswrapper[4808]: I0217 15:58:05.688748 4808 scope.go:117] "RemoveContainer" containerID="d4d5b852095399ce44bfa0213284ed51719f947f8972a9ff85b63a0705760e42" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.153198 4808 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.153998 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.154551 4808 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.615982 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.617462 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.618231 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 
15:58:07.618670 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.619268 4808 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:07 crc kubenswrapper[4808]: I0217 15:58:07.619500 4808 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.620755 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="200ms" Feb 17 15:58:07 crc kubenswrapper[4808]: E0217 15:58:07.822511 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="400ms" Feb 17 15:58:08 crc kubenswrapper[4808]: E0217 15:58:08.207118 4808 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" volumeName="registry-storage" Feb 17 15:58:08 crc kubenswrapper[4808]: E0217 15:58:08.223466 
4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="800ms" Feb 17 15:58:09 crc kubenswrapper[4808]: E0217 15:58:09.025224 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="1.6s" Feb 17 15:58:10 crc kubenswrapper[4808]: E0217 15:58:10.189220 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.64:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189513e037c419d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,LastTimestamp:2026-02-17 15:58:02.169358809 +0000 UTC m=+245.685717922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 17 15:58:10 crc kubenswrapper[4808]: E0217 15:58:10.626915 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="3.2s" Feb 17 15:58:12 crc kubenswrapper[4808]: I0217 15:58:12.489516 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" containerID="cri-o://a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8" gracePeriod=15 Feb 17 15:58:12 crc kubenswrapper[4808]: I0217 15:58:12.640319 4808 generic.go:334] "Generic (PLEG): container finished" podID="33978535-84b2-4def-af5a-d2819171e202" containerID="a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8" exitCode=0 Feb 17 15:58:12 crc kubenswrapper[4808]: I0217 15:58:12.640395 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerDied","Data":"a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8"} Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.103550 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.105428 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.106757 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.107251 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295101 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295330 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") pod 
\"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295440 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295524 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295701 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.295933 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296454 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296488 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296549 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296745 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296820 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296865 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296905 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.296998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") pod \"33978535-84b2-4def-af5a-d2819171e202\" (UID: \"33978535-84b2-4def-af5a-d2819171e202\") " Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297061 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297120 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). 
InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297653 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297683 4808 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.297706 4808 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/33978535-84b2-4def-af5a-d2819171e202-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.299466 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.299574 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.307067 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff" (OuterVolumeSpecName: "kube-api-access-hw8ff") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "kube-api-access-hw8ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.311496 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.311983 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.312215 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.313196 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.314014 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.319000 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.319427 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.319713 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "33978535-84b2-4def-af5a-d2819171e202" (UID: "33978535-84b2-4def-af5a-d2819171e202"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399718 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399814 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399839 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399860 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399881 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399903 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399926 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399949 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399968 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.399986 4808 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/33978535-84b2-4def-af5a-d2819171e202-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.400005 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw8ff\" (UniqueName: \"kubernetes.io/projected/33978535-84b2-4def-af5a-d2819171e202-kube-api-access-hw8ff\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.652032 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" event={"ID":"33978535-84b2-4def-af5a-d2819171e202","Type":"ContainerDied","Data":"844de191c1be070d299b4c3076870b370dc0d9ba311dfdcbe654f429c1b19e41"} Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.652158 4808 scope.go:117] "RemoveContainer" containerID="a1afe1988306793eee4a68327c90d6c1337c9d7cc71b57771cb662e2ecc6eca8" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.652188 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.653190 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.653981 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.654527 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.687025 4808 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.687874 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: I0217 15:58:13.688736 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:13 crc kubenswrapper[4808]: E0217 15:58:13.828836 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.64:6443: connect: connection refused" interval="6.4s" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.677290 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.677374 4808 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1" exitCode=1 Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.677418 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1"} Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.678069 4808 scope.go:117] "RemoveContainer" containerID="8b00de586738e2d759aa971e2114def8fdfeb2a25fd72f482d75b9f46ea9a3d1" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.678564 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.679474 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.679731 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:15 crc kubenswrapper[4808]: I0217 15:58:15.680131 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.145138 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.147776 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.148092 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.148303 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.148488 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" 
Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.169823 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.169852 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: E0217 15:58:16.170140 4808 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.170698 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: W0217 15:58:16.226036 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a WatchSource:0}: Error finding container b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a: Status 404 returned error can't find the container with id b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688485 4808 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="53b1f63ad4a1e73a7a5b4281325525eb23d2ef389b3a438a9ccc3a7cd68efb4c" exitCode=0 Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"53b1f63ad4a1e73a7a5b4281325525eb23d2ef389b3a438a9ccc3a7cd68efb4c"} Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688633 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b99cbcec80af649623847a31b73f99c2879c20aa68786e69cb69c6a2b59eef9a"} Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688959 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.688976 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.689375 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: E0217 15:58:16.689501 4808 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.689896 4808 status_manager.go:851] "Failed to get status for pod" podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial 
tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.690690 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.691469 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.693556 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.693681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"55253c43280c77b6bc119cc1128dfc269bbd55032c2487fbd10280a86ea1efe4"} Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.694696 4808 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.695232 4808 status_manager.go:851] "Failed to get status for pod" 
podUID="33978535-84b2-4def-af5a-d2819171e202" pod="openshift-authentication/oauth-openshift-558db77b4-j6dgq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-j6dgq\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.695764 4808 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:16 crc kubenswrapper[4808]: I0217 15:58:16.696226 4808 status_manager.go:851] "Failed to get status for pod" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.64:6443: connect: connection refused" Feb 17 15:58:17 crc kubenswrapper[4808]: I0217 15:58:17.703829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f1f986fc5c200a9569c35b566e4cfa04f7cfc4f9f9c2e396942948391360e9f7"} Feb 17 15:58:17 crc kubenswrapper[4808]: I0217 15:58:17.704143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"45bb6769049b70b73e54028c6538c1a4c6afe8cd3f1ea0ec050dc73c5f84b0f5"} Feb 17 15:58:17 crc kubenswrapper[4808]: I0217 15:58:17.704159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"354e04967c324787ec4d636571c752b9a3dbcd280e56881c5956bf447f10c843"} Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714508 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c900cbde8d41dd6701813db391f8cd7ff105bdb00c8bad4c7b9052c074b82ec"} Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39729087f2357d86ff966385e9c7822886245ecf7a94f94bda651a1d68c4040f"} Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714728 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714896 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:18 crc kubenswrapper[4808]: I0217 15:58:18.714922 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:19 crc kubenswrapper[4808]: I0217 15:58:19.386706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:19 crc kubenswrapper[4808]: I0217 15:58:19.394717 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:19 crc kubenswrapper[4808]: I0217 15:58:19.720368 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 
15:58:21 crc kubenswrapper[4808]: I0217 15:58:21.171664 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:21 crc kubenswrapper[4808]: I0217 15:58:21.172247 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:21 crc kubenswrapper[4808]: I0217 15:58:21.179649 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.730233 4808 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.768391 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.768456 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.772129 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:23 crc kubenswrapper[4808]: I0217 15:58:23.774786 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5e083d5b-b359-4f1b-b671-3700cc1ac9ad" Feb 17 15:58:24 crc kubenswrapper[4808]: I0217 15:58:24.775851 4808 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:24 crc kubenswrapper[4808]: I0217 15:58:24.776299 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="efd34c89-7350-4ce0-83d9-302614df88f7" Feb 17 15:58:27 crc kubenswrapper[4808]: I0217 15:58:27.183081 4808 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5e083d5b-b359-4f1b-b671-3700cc1ac9ad" Feb 17 15:58:30 crc kubenswrapper[4808]: I0217 15:58:30.477846 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.322939 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.601468 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.762907 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.844626 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.950728 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 17 15:58:33 crc kubenswrapper[4808]: I0217 15:58:33.991406 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.055692 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.154051 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.640385 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.726867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.765820 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.789223 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 17 15:58:34 crc kubenswrapper[4808]: I0217 15:58:34.846026 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.202984 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.203661 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.203640604 podStartE2EDuration="34.203640604s" podCreationTimestamp="2026-02-17 15:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:23.68228368 +0000 UTC m=+267.198642763" watchObservedRunningTime="2026-02-17 15:58:35.203640604 +0000 UTC m=+278.719999687" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.222257 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.222682 4808 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-j6dgq","openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.222773 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.227821 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.244426 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.262001 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.26197315 podStartE2EDuration="12.26197315s" podCreationTimestamp="2026-02-17 15:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:35.256951415 +0000 UTC m=+278.773310518" watchObservedRunningTime="2026-02-17 15:58:35.26197315 +0000 UTC m=+278.778332253" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.623198 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 17 15:58:35 crc kubenswrapper[4808]: I0217 15:58:35.848694 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.111053 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.347221 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.485778 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.691266 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.692229 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 17 15:58:36 crc kubenswrapper[4808]: I0217 15:58:36.758115 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.061172 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.157735 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33978535-84b2-4def-af5a-d2819171e202" path="/var/lib/kubelet/pods/33978535-84b2-4def-af5a-d2819171e202/volumes" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.187059 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.497243 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.564743 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.707108 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 17 15:58:37 crc kubenswrapper[4808]: I0217 15:58:37.801959 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.052118 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.151447 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.194544 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.224038 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.305179 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.329391 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.334664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.341372 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.373004 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 17 15:58:38 crc 
kubenswrapper[4808]: I0217 15:58:38.412403 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.511016 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.531291 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.590982 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.606026 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.609031 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.625137 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.651959 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.867046 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.883254 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.938722 4808 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.952541 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 17 15:58:38 crc kubenswrapper[4808]: I0217 15:58:38.964046 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.010664 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.026387 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.093718 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.154851 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.227801 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.288894 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.371622 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.378480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 17 15:58:39 crc 
kubenswrapper[4808]: I0217 15:58:39.386450 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.427550 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.541140 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.550112 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.552953 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.620008 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.629937 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.635021 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.789215 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.916684 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 17 15:58:39 crc kubenswrapper[4808]: I0217 15:58:39.956564 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 17 15:58:39 crc 
kubenswrapper[4808]: I0217 15:58:39.973980 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.054296 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.123000 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.128870 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.156441 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.177066 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.194984 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.207174 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.310326 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.351721 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.456310 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.485250 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.517622 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.666466 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.786453 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.845081 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.846992 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.848153 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.858910 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.954762 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 17 15:58:40 crc kubenswrapper[4808]: I0217 15:58:40.988447 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 17 15:58:40 
crc kubenswrapper[4808]: I0217 15:58:40.999608 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.018061 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.062602 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.127430 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.136659 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.205303 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.217216 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.225655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.344894 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.514131 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.583763 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.614410 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.627016 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.663561 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.693569 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.721318 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.757765 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.823903 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.827236 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 17 15:58:41 crc kubenswrapper[4808]: I0217 15:58:41.890237 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.044431 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 
15:58:42.176274 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.178061 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.296877 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.326438 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.425400 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.451547 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.467220 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.478173 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.522106 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.542982 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.591864 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.633143 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.641239 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.642795 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.710050 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.770270 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.839762 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 17 15:58:42 crc kubenswrapper[4808]: I0217 15:58:42.973862 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.041194 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.085770 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.160296 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.224859 4808 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.285553 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.349619 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.575914 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.624822 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.637295 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.638837 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.665294 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.672673 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.818128 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.848303 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 
17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.848760 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.888355 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 17 15:58:43 crc kubenswrapper[4808]: I0217 15:58:43.926350 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.135737 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.191293 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.241238 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.247316 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.321917 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.343764 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.376156 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.395277 4808 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.469871 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.472348 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.502744 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.536535 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.605707 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.648506 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.670639 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.681389 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.696063 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.700464 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.714360 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.867896 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.922312 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 17 15:58:44 crc kubenswrapper[4808]: I0217 15:58:44.966399 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.009847 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.073704 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.131939 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.167059 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.176964 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.177710 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.178893 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.189829 
4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.189911 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.192048 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.216523 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.259406 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.279873 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7cf76d985f-jm4q8"] Feb 17 15:58:45 crc kubenswrapper[4808]: E0217 15:58:45.280489 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerName="installer" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280529 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerName="installer" Feb 17 15:58:45 crc kubenswrapper[4808]: E0217 15:58:45.280559 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280572 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280757 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="33978535-84b2-4def-af5a-d2819171e202" containerName="oauth-openshift" Feb 17 
15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.280788 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6a81ca-0d6e-48d2-a0a2-ada5fcb8b25e" containerName="installer" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.281408 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.283731 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.286402 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.286676 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.289066 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.290473 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.290579 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291219 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291336 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291882 4808 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.291869 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.295821 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.303342 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.317550 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.317748 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.319297 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.330124 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.333476 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cf76d985f-jm4q8"] Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.335655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.455987 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483717 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483790 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.483869 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: 
I0217 15:58:45.484044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-policies\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484246 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-session\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484302 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xv6x\" (UniqueName: \"kubernetes.io/projected/a005d347-0020-4aa0-a3b7-9d406bfa9612-kube-api-access-2xv6x\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484441 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-dir\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " 
pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.484696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.565262 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.582386 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.585895 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.585955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-dir\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.585982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586087 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586111 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " 
pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586137 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586186 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-policies\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586234 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-session\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xv6x\" (UniqueName: \"kubernetes.io/projected/a005d347-0020-4aa0-a3b7-9d406bfa9612-kube-api-access-2xv6x\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.587560 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.588215 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.588913 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-service-ca\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.586085 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-dir\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.590654 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-audit-policies\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.590751 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.593738 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.593786 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-error\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.594478 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-login\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.595229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-router-certs\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.595377 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.595509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-session\") pod 
\"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.596663 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.597743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a005d347-0020-4aa0-a3b7-9d406bfa9612-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.625461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xv6x\" (UniqueName: \"kubernetes.io/projected/a005d347-0020-4aa0-a3b7-9d406bfa9612-kube-api-access-2xv6x\") pod \"oauth-openshift-7cf76d985f-jm4q8\" (UID: \"a005d347-0020-4aa0-a3b7-9d406bfa9612\") " pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.625919 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.665923 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.669098 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.785148 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 17 15:58:45 crc kubenswrapper[4808]: I0217 15:58:45.938056 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.048064 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.114877 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.171632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.275617 4808 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.275893 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" gracePeriod=5 Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.308111 4808 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.352931 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.421437 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.535338 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7cf76d985f-jm4q8"] Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.576311 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.732654 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.858439 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.939308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" event={"ID":"a005d347-0020-4aa0-a3b7-9d406bfa9612","Type":"ContainerStarted","Data":"c4541dc422fccfc33b0d062375df1b8d0617039f10384c2072492d9dbc4efadd"} Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.939364 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" event={"ID":"a005d347-0020-4aa0-a3b7-9d406bfa9612","Type":"ContainerStarted","Data":"11568489746d2c1e860e39d7b3cf2bb2ced91a2b31b61a1b72c26ed6f6a983c9"} Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 
15:58:46.939641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.942091 4808 patch_prober.go:28] interesting pod/oauth-openshift-7cf76d985f-jm4q8 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" start-of-body= Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.942136 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" podUID="a005d347-0020-4aa0-a3b7-9d406bfa9612" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": dial tcp 10.217.0.65:6443: connect: connection refused" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.962888 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" podStartSLOduration=59.962852113 podStartE2EDuration="59.962852113s" podCreationTimestamp="2026-02-17 15:57:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 15:58:46.957772277 +0000 UTC m=+290.474131380" watchObservedRunningTime="2026-02-17 15:58:46.962852113 +0000 UTC m=+290.479211186" Feb 17 15:58:46 crc kubenswrapper[4808]: I0217 15:58:46.984278 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.108342 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.167085 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.400749 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.443110 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.471660 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.602945 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.758099 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.873141 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.874724 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.910415 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 17 15:58:47 crc kubenswrapper[4808]: I0217 15:58:47.948341 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7cf76d985f-jm4q8" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.042189 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 
15:58:48.068171 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.078973 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.125631 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.184290 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.333787 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.463532 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.572822 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.676352 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 17 15:58:48 crc kubenswrapper[4808]: I0217 15:58:48.799526 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.131331 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.334678 4808 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.392885 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.434199 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.464096 4808 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.521958 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.645833 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.655643 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 17 15:58:49 crc kubenswrapper[4808]: I0217 15:58:49.750577 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 17 15:58:50 crc kubenswrapper[4808]: I0217 15:58:50.055191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 17 15:58:50 crc kubenswrapper[4808]: I0217 15:58:50.066073 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 17 15:58:50 crc kubenswrapper[4808]: I0217 15:58:50.859241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.199829 4808 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.868240 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.868324 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.974967 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.975038 4808 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" exitCode=137 Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.975107 4808 scope.go:117] "RemoveContainer" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.975193 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983100 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983176 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983446 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983465 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983520 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.983655 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984030 4808 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984072 4808 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984096 4808 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.984123 4808 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:51 crc kubenswrapper[4808]: I0217 15:58:51.996685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 15:58:52 crc kubenswrapper[4808]: I0217 15:58:52.003090 4808 scope.go:117] "RemoveContainer" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" Feb 17 15:58:52 crc kubenswrapper[4808]: E0217 15:58:52.004236 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026\": container with ID starting with 1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026 not found: ID does not exist" containerID="1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026" Feb 17 15:58:52 crc kubenswrapper[4808]: I0217 15:58:52.004302 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026"} err="failed to get container status \"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026\": rpc error: code = NotFound desc = could not find container \"1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026\": container with ID starting with 1aad017c95d37d7e1d108001e119581b1379d3c0c63d28c65df4fdfd7a716026 not found: ID does not exist" Feb 17 15:58:52 crc kubenswrapper[4808]: I0217 15:58:52.085908 4808 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.163679 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.165077 4808 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 17 
15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.191077 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.191146 4808 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e0f80900-d24b-4479-bbe3-b422e8628d4b" Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.198677 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 17 15:58:53 crc kubenswrapper[4808]: I0217 15:58:53.198779 4808 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e0f80900-d24b-4479-bbe3-b422e8628d4b" Feb 17 15:58:56 crc kubenswrapper[4808]: I0217 15:58:56.956870 4808 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 17 15:59:05 crc kubenswrapper[4808]: I0217 15:59:05.263274 4808 generic.go:334] "Generic (PLEG): container finished" podID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerID="1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2" exitCode=0 Feb 17 15:59:05 crc kubenswrapper[4808]: I0217 15:59:05.263368 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerDied","Data":"1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2"} Feb 17 15:59:05 crc kubenswrapper[4808]: I0217 15:59:05.264917 4808 scope.go:117] "RemoveContainer" containerID="1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2" Feb 17 15:59:06 crc kubenswrapper[4808]: I0217 15:59:06.274270 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerStarted","Data":"39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed"} Feb 17 15:59:06 crc kubenswrapper[4808]: I0217 15:59:06.275611 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:59:06 crc kubenswrapper[4808]: I0217 15:59:06.278513 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 15:59:17 crc kubenswrapper[4808]: I0217 15:59:17.867689 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 17 15:59:18 crc kubenswrapper[4808]: I0217 15:59:18.227337 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 17 15:59:21 crc kubenswrapper[4808]: I0217 15:59:21.592990 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:59:21 crc kubenswrapper[4808]: I0217 15:59:21.593701 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 15:59:51 crc kubenswrapper[4808]: I0217 15:59:51.592698 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 15:59:51 crc kubenswrapper[4808]: I0217 15:59:51.593739 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.203514 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:00:00 crc kubenswrapper[4808]: E0217 16:00:00.204656 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.204682 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.204918 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.205554 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.208897 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.214278 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.215999 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.276331 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.276621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.276678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.378059 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.378185 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.378213 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.379601 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.385773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.398078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"collect-profiles-29522400-gqxpq\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.434271 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vdjh6"] Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.435164 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.447709 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vdjh6"] Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480073 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-bound-sa-token\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480132 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-tls\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: 
\"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-trusted-ca\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480392 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-certificates\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68a94516-1d30-4e3c-ac74-900be5a9a652-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480704 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68a94516-1d30-4e3c-ac74-900be5a9a652-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.480749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x47hm\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-kube-api-access-x47hm\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.506735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.537560 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582213 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-certificates\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68a94516-1d30-4e3c-ac74-900be5a9a652-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582279 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68a94516-1d30-4e3c-ac74-900be5a9a652-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582298 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x47hm\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-kube-api-access-x47hm\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582332 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-bound-sa-token\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582350 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-tls\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.582371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-trusted-ca\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.583312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/68a94516-1d30-4e3c-ac74-900be5a9a652-ca-trust-extracted\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.583591 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-trusted-ca\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.583773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-certificates\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.588013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/68a94516-1d30-4e3c-ac74-900be5a9a652-installation-pull-secrets\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.588119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-registry-tls\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.598017 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-bound-sa-token\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.598800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x47hm\" (UniqueName: \"kubernetes.io/projected/68a94516-1d30-4e3c-ac74-900be5a9a652-kube-api-access-x47hm\") pod \"image-registry-66df7c8f76-vdjh6\" (UID: \"68a94516-1d30-4e3c-ac74-900be5a9a652\") " pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.751998 4808 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:00 crc kubenswrapper[4808]: I0217 16:00:00.753728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.015164 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-vdjh6"] Feb 17 16:00:01 crc kubenswrapper[4808]: W0217 16:00:01.024290 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68a94516_1d30_4e3c_ac74_900be5a9a652.slice/crio-6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310 WatchSource:0}: Error finding container 6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310: Status 404 returned error can't find the container with id 6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310 Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.707043 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" event={"ID":"68a94516-1d30-4e3c-ac74-900be5a9a652","Type":"ContainerStarted","Data":"3bb1816d2059313cbb34f34bfd48513a99e6ba235b649edb11e010b90a5c62b6"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.707380 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" event={"ID":"68a94516-1d30-4e3c-ac74-900be5a9a652","Type":"ContainerStarted","Data":"6136f45b05ddfab5b40f52c17efab6dda0b618d0f5942bd07a0ce504ec2a6310"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.707397 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.708776 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerID="a5c43165b9e051b89a89100aebbe7b3cc4c01775c317fec65c06ca231b1fc493" exitCode=0 Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.708829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" event={"ID":"d231c3b2-ee81-488d-b526-77ab9c8a2822","Type":"ContainerDied","Data":"a5c43165b9e051b89a89100aebbe7b3cc4c01775c317fec65c06ca231b1fc493"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.708862 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" event={"ID":"d231c3b2-ee81-488d-b526-77ab9c8a2822","Type":"ContainerStarted","Data":"f6804ef9baa91191e4b576d5f932378596dfb3ab3b8a9e55ede18e311e5b2d6f"} Feb 17 16:00:01 crc kubenswrapper[4808]: I0217 16:00:01.729829 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" podStartSLOduration=1.729813227 podStartE2EDuration="1.729813227s" podCreationTimestamp="2026-02-17 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:01.726627081 +0000 UTC m=+365.242986184" watchObservedRunningTime="2026-02-17 16:00:01.729813227 +0000 UTC m=+365.246172300" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.041916 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.220462 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") pod \"d231c3b2-ee81-488d-b526-77ab9c8a2822\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.220664 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") pod \"d231c3b2-ee81-488d-b526-77ab9c8a2822\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.220720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") pod \"d231c3b2-ee81-488d-b526-77ab9c8a2822\" (UID: \"d231c3b2-ee81-488d-b526-77ab9c8a2822\") " Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.222031 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume" (OuterVolumeSpecName: "config-volume") pod "d231c3b2-ee81-488d-b526-77ab9c8a2822" (UID: "d231c3b2-ee81-488d-b526-77ab9c8a2822"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.227246 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp" (OuterVolumeSpecName: "kube-api-access-lpnxp") pod "d231c3b2-ee81-488d-b526-77ab9c8a2822" (UID: "d231c3b2-ee81-488d-b526-77ab9c8a2822"). 
InnerVolumeSpecName "kube-api-access-lpnxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.233793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d231c3b2-ee81-488d-b526-77ab9c8a2822" (UID: "d231c3b2-ee81-488d-b526-77ab9c8a2822"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.323593 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d231c3b2-ee81-488d-b526-77ab9c8a2822-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.323846 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d231c3b2-ee81-488d-b526-77ab9c8a2822-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.323911 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpnxp\" (UniqueName: \"kubernetes.io/projected/d231c3b2-ee81-488d-b526-77ab9c8a2822-kube-api-access-lpnxp\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.724764 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" event={"ID":"d231c3b2-ee81-488d-b526-77ab9c8a2822","Type":"ContainerDied","Data":"f6804ef9baa91191e4b576d5f932378596dfb3ab3b8a9e55ede18e311e5b2d6f"} Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.724807 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6804ef9baa91191e4b576d5f932378596dfb3ab3b8a9e55ede18e311e5b2d6f" Feb 17 16:00:03 crc kubenswrapper[4808]: I0217 16:00:03.724859 4808 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq" Feb 17 16:00:20 crc kubenswrapper[4808]: I0217 16:00:20.759774 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-vdjh6" Feb 17 16:00:20 crc kubenswrapper[4808]: I0217 16:00:20.842538 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.592639 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.592763 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.592839 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.596161 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.596344 4808 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1" gracePeriod=600 Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.850079 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1" exitCode=0 Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.850491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"} Feb 17 16:00:21 crc kubenswrapper[4808]: I0217 16:00:21.850544 4808 scope.go:117] "RemoveContainer" containerID="383650c9e8169aa5621d731ebcbfdd1ace0491ad4e7931fca1f6b595e0e782b9" Feb 17 16:00:22 crc kubenswrapper[4808]: I0217 16:00:22.859150 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db"} Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.728795 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.729772 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hn7fn" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server" containerID="cri-o://ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 
16:00:39.751531 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22x8m"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.751934 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-22x8m" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server" containerID="cri-o://5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.772910 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.773324 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" containerID="cri-o://39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.781520 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.781968 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cs597" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" containerID="cri-o://1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.787300 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.787686 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8jsrz" 
podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" containerID="cri-o://aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852" gracePeriod=30 Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.792941 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2wfq"] Feb 17 16:00:39 crc kubenswrapper[4808]: E0217 16:00:39.793470 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerName="collect-profiles" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.793503 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerName="collect-profiles" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.793749 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" containerName="collect-profiles" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.794537 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.796400 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2wfq"] Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.908382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5dtb\" (UniqueName: \"kubernetes.io/projected/012287fd-dda3-4c7b-af1f-576ec2dc479b-kube-api-access-c5dtb\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.908448 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:39 crc kubenswrapper[4808]: I0217 16:00:39.908481 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.009198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5dtb\" (UniqueName: \"kubernetes.io/projected/012287fd-dda3-4c7b-af1f-576ec2dc479b-kube-api-access-c5dtb\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: 
\"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.009247 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.009287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.010894 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.017487 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/012287fd-dda3-4c7b-af1f-576ec2dc479b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.028802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c5dtb\" (UniqueName: \"kubernetes.io/projected/012287fd-dda3-4c7b-af1f-576ec2dc479b-kube-api-access-c5dtb\") pod \"marketplace-operator-79b997595-v2wfq\" (UID: \"012287fd-dda3-4c7b-af1f-576ec2dc479b\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.035364 4808 generic.go:334] "Generic (PLEG): container finished" podID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerID="aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.035425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.040229 4808 generic.go:334] "Generic (PLEG): container finished" podID="543b2019-8399-411e-8e8b-45787b96873f" containerID="5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.040393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.043514 4808 generic.go:334] "Generic (PLEG): container finished" podID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerID="39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.043780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" 
event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerDied","Data":"39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.043817 4808 scope.go:117] "RemoveContainer" containerID="1c4f11a7931bfb6c7e6734178fd2038fdd115a2788998f8ef169fbd7407cf6d2" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.056220 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerID="ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.056307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.059849 4808 generic.go:334] "Generic (PLEG): container finished" podID="48efd125-e3aa-444d-91a3-fa915be48b46" containerID="1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0" exitCode=0 Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.059911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0"} Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.184076 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.197047 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22x8m" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.198448 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.198664 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.201748 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.247242 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316243 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptbxm\" (UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") pod \"48efd125-e3aa-444d-91a3-fa915be48b46\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316302 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") pod \"48efd125-e3aa-444d-91a3-fa915be48b46\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316333 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") pod \"543b2019-8399-411e-8e8b-45787b96873f\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " Feb 17 16:00:40 crc 
kubenswrapper[4808]: I0217 16:00:40.316386 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") pod \"b0793347-d948-480b-b5a7-d0fed7e12b38\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316421 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") pod \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316440 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") pod \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316467 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") pod \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\" (UID: \"a1db3ff7-c43f-412e-ab72-3d592b6352b0\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") pod \"48efd125-e3aa-444d-91a3-fa915be48b46\" (UID: \"48efd125-e3aa-444d-91a3-fa915be48b46\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316516 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") pod \"b0793347-d948-480b-b5a7-d0fed7e12b38\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.316551 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") pod \"b0793347-d948-480b-b5a7-d0fed7e12b38\" (UID: \"b0793347-d948-480b-b5a7-d0fed7e12b38\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.317487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities" (OuterVolumeSpecName: "utilities") pod "48efd125-e3aa-444d-91a3-fa915be48b46" (UID: "48efd125-e3aa-444d-91a3-fa915be48b46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.318089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b0793347-d948-480b-b5a7-d0fed7e12b38" (UID: "b0793347-d948-480b-b5a7-d0fed7e12b38"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.318699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities" (OuterVolumeSpecName: "utilities") pod "a1db3ff7-c43f-412e-ab72-3d592b6352b0" (UID: "a1db3ff7-c43f-412e-ab72-3d592b6352b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm" (OuterVolumeSpecName: "kube-api-access-ptbxm") pod "48efd125-e3aa-444d-91a3-fa915be48b46" (UID: "48efd125-e3aa-444d-91a3-fa915be48b46"). InnerVolumeSpecName "kube-api-access-ptbxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323438 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n" (OuterVolumeSpecName: "kube-api-access-sp46n") pod "a1db3ff7-c43f-412e-ab72-3d592b6352b0" (UID: "a1db3ff7-c43f-412e-ab72-3d592b6352b0"). InnerVolumeSpecName "kube-api-access-sp46n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323495 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n" (OuterVolumeSpecName: "kube-api-access-h922n") pod "543b2019-8399-411e-8e8b-45787b96873f" (UID: "543b2019-8399-411e-8e8b-45787b96873f"). InnerVolumeSpecName "kube-api-access-h922n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.323714 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b0793347-d948-480b-b5a7-d0fed7e12b38" (UID: "b0793347-d948-480b-b5a7-d0fed7e12b38"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.324469 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") pod \"543b2019-8399-411e-8e8b-45787b96873f\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.324493 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") pod \"543b2019-8399-411e-8e8b-45787b96873f\" (UID: \"543b2019-8399-411e-8e8b-45787b96873f\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325000 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325013 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp46n\" (UniqueName: \"kubernetes.io/projected/a1db3ff7-c43f-412e-ab72-3d592b6352b0-kube-api-access-sp46n\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325025 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325035 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325048 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptbxm\" 
(UniqueName: \"kubernetes.io/projected/48efd125-e3aa-444d-91a3-fa915be48b46-kube-api-access-ptbxm\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325057 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h922n\" (UniqueName: \"kubernetes.io/projected/543b2019-8399-411e-8e8b-45787b96873f-kube-api-access-h922n\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325066 4808 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b0793347-d948-480b-b5a7-d0fed7e12b38-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.325415 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj" (OuterVolumeSpecName: "kube-api-access-cdhmj") pod "b0793347-d948-480b-b5a7-d0fed7e12b38" (UID: "b0793347-d948-480b-b5a7-d0fed7e12b38"). InnerVolumeSpecName "kube-api-access-cdhmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.326379 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities" (OuterVolumeSpecName: "utilities") pod "543b2019-8399-411e-8e8b-45787b96873f" (UID: "543b2019-8399-411e-8e8b-45787b96873f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.358315 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48efd125-e3aa-444d-91a3-fa915be48b46" (UID: "48efd125-e3aa-444d-91a3-fa915be48b46"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.388487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1db3ff7-c43f-412e-ab72-3d592b6352b0" (UID: "a1db3ff7-c43f-412e-ab72-3d592b6352b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.389460 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "543b2019-8399-411e-8e8b-45787b96873f" (UID: "543b2019-8399-411e-8e8b-45787b96873f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425451 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") pod \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") pod \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\" (UID: \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425604 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") pod \"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\" (UID: 
\"e22d34a8-92f6-4a2a-a0f5-e063c25afac1\") " Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425836 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48efd125-e3aa-444d-91a3-fa915be48b46-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425847 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1db3ff7-c43f-412e-ab72-3d592b6352b0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425856 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdhmj\" (UniqueName: \"kubernetes.io/projected/b0793347-d948-480b-b5a7-d0fed7e12b38-kube-api-access-cdhmj\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425866 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.425875 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/543b2019-8399-411e-8e8b-45787b96873f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.427681 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities" (OuterVolumeSpecName: "utilities") pod "e22d34a8-92f6-4a2a-a0f5-e063c25afac1" (UID: "e22d34a8-92f6-4a2a-a0f5-e063c25afac1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.431229 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2wfq"] Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.433760 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc" (OuterVolumeSpecName: "kube-api-access-bfwdc") pod "e22d34a8-92f6-4a2a-a0f5-e063c25afac1" (UID: "e22d34a8-92f6-4a2a-a0f5-e063c25afac1"). InnerVolumeSpecName "kube-api-access-bfwdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.527008 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.527233 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfwdc\" (UniqueName: \"kubernetes.io/projected/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-kube-api-access-bfwdc\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.578712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e22d34a8-92f6-4a2a-a0f5-e063c25afac1" (UID: "e22d34a8-92f6-4a2a-a0f5-e063c25afac1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:40 crc kubenswrapper[4808]: I0217 16:00:40.628370 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e22d34a8-92f6-4a2a-a0f5-e063c25afac1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.067327 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" event={"ID":"b0793347-d948-480b-b5a7-d0fed7e12b38","Type":"ContainerDied","Data":"026165e1bd109fad794dffddae09d3e255a5318f60f94f71f305c72e7d4ac00e"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.067388 4808 scope.go:117] "RemoveContainer" containerID="39d5ff5dd804706cac13ddc305146999917b8de3246e042798c68cde55b248ed" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.067355 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sbr84" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.071741 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hn7fn" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.071757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hn7fn" event={"ID":"a1db3ff7-c43f-412e-ab72-3d592b6352b0","Type":"ContainerDied","Data":"a45a3dcf61a1bf78b3c958287ad11993acb14303ea923a5033d56896c26a6ab3"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.074109 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" event={"ID":"012287fd-dda3-4c7b-af1f-576ec2dc479b","Type":"ContainerStarted","Data":"eaf65c679dacb3b04fb5e80de2028cbc11e3e31becac5bae377dfc8eaba3fedd"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.074157 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" event={"ID":"012287fd-dda3-4c7b-af1f-576ec2dc479b","Type":"ContainerStarted","Data":"175ef94fb6c0bf727103da307105f12e6f048b80375e60513ea8f41627457074"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.074184 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.077123 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cs597" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.077370 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cs597" event={"ID":"48efd125-e3aa-444d-91a3-fa915be48b46","Type":"ContainerDied","Data":"126635f0be61976c959568021a2dceebba5ec8a4421ba4bd848eb5998d5c720b"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.080111 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.087153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jsrz" event={"ID":"e22d34a8-92f6-4a2a-a0f5-e063c25afac1","Type":"ContainerDied","Data":"74a889b6efdb919b84134965ae425faf36a72c4e4787bd3f59cfb8cf73e5c6b2"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.087220 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8jsrz" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.094832 4808 scope.go:117] "RemoveContainer" containerID="ab1f4fdafb32d3b5b88908e1013b0deb27471f76f61f16612081d0858b9c0b31" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.101284 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v2wfq" podStartSLOduration=2.101273891 podStartE2EDuration="2.101273891s" podCreationTimestamp="2026-02-17 16:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:00:41.10024706 +0000 UTC m=+404.616606153" watchObservedRunningTime="2026-02-17 16:00:41.101273891 +0000 UTC m=+404.617632964" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.119634 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.127028 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22x8m" event={"ID":"543b2019-8399-411e-8e8b-45787b96873f","Type":"ContainerDied","Data":"88ab9dc080b2cadb5ff2951ac6094d56029248c1c148ac36b7e2a6167225bf7c"} Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.127188 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22x8m" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.128519 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hn7fn"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.137129 4808 scope.go:117] "RemoveContainer" containerID="56e991bdc7726b6c61887160d04bc51376a606946a766ba535be7f736adc85e3" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.158237 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" path="/var/lib/kubelet/pods/a1db3ff7-c43f-412e-ab72-3d592b6352b0/volumes" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.162692 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.162776 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cs597"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.163180 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.189091 4808 scope.go:117] "RemoveContainer" containerID="b039d42ff08392f60bfd69fd494b2249c19f74796e443b4b4b8b827c93e49b48" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.213700 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8jsrz"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.217929 4808 scope.go:117] "RemoveContainer" containerID="1789b161d1d589d4f4b637bcd20330b171b3967cd4acb37da4ed2b0c3bffddf0" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.221127 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.229359 4808 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sbr84"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.233436 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22x8m"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.234764 4808 scope.go:117] "RemoveContainer" containerID="2e27c972236a280162abd4cf4685ed84882d0bc3042df73d9e827a7ec611814e" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.239974 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22x8m"] Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.254748 4808 scope.go:117] "RemoveContainer" containerID="2d27bebccfda20ebcc5c228a8194fccc9e95ec81e20baedc530a917fdd03e867" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.274010 4808 scope.go:117] "RemoveContainer" containerID="aa3fed03abacd35eb7bb1f3065835e28313c3e4962262338c33f30c7827d8852" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.294215 4808 scope.go:117] "RemoveContainer" containerID="616c2fdd03b2d5398b274f5ab3d43d25dcd8bacb210382e6b982a39d3da41dd3" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.311634 4808 scope.go:117] "RemoveContainer" containerID="3c46a03c8aecba377b0d1ea2fda18a067c3dd9d9e53d4229b5338fca0d7a98e0" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.323502 4808 scope.go:117] "RemoveContainer" containerID="5e0ccb5571695b0a11ced97259c836c8ed65e804c680e02618b7b777ab17bf5c" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.337027 4808 scope.go:117] "RemoveContainer" containerID="335aab9c25e746284f138cf133ee4f794236186f62c6450d29a99ecbca2622cc" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.353352 4808 scope.go:117] "RemoveContainer" containerID="a1b466a7276199cdb3d16661c145bd9226ea4df1371372728f98eec1641d1432" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.919165 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-bbhct"] Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922042 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922068 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922080 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922089 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922102 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922114 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922127 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922135 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922143 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922150 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922160 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922167 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922180 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922187 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922197 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922204 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922212 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922219 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922227 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922258 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-content" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922269 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922276 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922289 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922296 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922304 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922311 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="extract-utilities" Feb 17 16:00:41 crc kubenswrapper[4808]: E0217 16:00:41.922322 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922330 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922454 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922473 4808 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922495 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="543b2019-8399-411e-8e8b-45787b96873f" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922504 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" containerName="marketplace-operator" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.922512 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1db3ff7-c43f-412e-ab72-3d592b6352b0" containerName="registry-server" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.923435 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.926926 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 17 16:00:41 crc kubenswrapper[4808]: I0217 16:00:41.936444 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbhct"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.048300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-utilities\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.048421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-catalog-content\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.048450 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dq7r\" (UniqueName: \"kubernetes.io/projected/5011758e-a6e4-4491-8ac6-c0a8bcb50568-kube-api-access-8dq7r\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.116388 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lstjz"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.119465 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.122882 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.138212 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lstjz"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.149281 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-catalog-content\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.149336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dq7r\" (UniqueName: \"kubernetes.io/projected/5011758e-a6e4-4491-8ac6-c0a8bcb50568-kube-api-access-8dq7r\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.149413 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-utilities\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.150176 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-catalog-content\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " 
pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.150221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5011758e-a6e4-4491-8ac6-c0a8bcb50568-utilities\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.190566 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dq7r\" (UniqueName: \"kubernetes.io/projected/5011758e-a6e4-4491-8ac6-c0a8bcb50568-kube-api-access-8dq7r\") pod \"redhat-marketplace-bbhct\" (UID: \"5011758e-a6e4-4491-8ac6-c0a8bcb50568\") " pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.250254 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-utilities\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.250657 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-catalog-content\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.250882 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcnxj\" (UniqueName: \"kubernetes.io/projected/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-kube-api-access-jcnxj\") pod \"redhat-operators-lstjz\" (UID: 
\"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.266693 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bbhct" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.352938 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-catalog-content\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353259 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcnxj\" (UniqueName: \"kubernetes.io/projected/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-kube-api-access-jcnxj\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-utilities\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-catalog-content\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.353864 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-utilities\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.377283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcnxj\" (UniqueName: \"kubernetes.io/projected/bcdfcb0d-7a0d-4cee-a80f-f49f078bef37-kube-api-access-jcnxj\") pod \"redhat-operators-lstjz\" (UID: \"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37\") " pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.452498 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bbhct"] Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.452557 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lstjz" Feb 17 16:00:42 crc kubenswrapper[4808]: W0217 16:00:42.463386 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5011758e_a6e4_4491_8ac6_c0a8bcb50568.slice/crio-921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284 WatchSource:0}: Error finding container 921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284: Status 404 returned error can't find the container with id 921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284 Feb 17 16:00:42 crc kubenswrapper[4808]: I0217 16:00:42.661509 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lstjz"] Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.156671 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48efd125-e3aa-444d-91a3-fa915be48b46" path="/var/lib/kubelet/pods/48efd125-e3aa-444d-91a3-fa915be48b46/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.158004 
4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543b2019-8399-411e-8e8b-45787b96873f" path="/var/lib/kubelet/pods/543b2019-8399-411e-8e8b-45787b96873f/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.159375 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0793347-d948-480b-b5a7-d0fed7e12b38" path="/var/lib/kubelet/pods/b0793347-d948-480b-b5a7-d0fed7e12b38/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.160939 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22d34a8-92f6-4a2a-a0f5-e063c25afac1" path="/var/lib/kubelet/pods/e22d34a8-92f6-4a2a-a0f5-e063c25afac1/volumes" Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.182021 4808 generic.go:334] "Generic (PLEG): container finished" podID="bcdfcb0d-7a0d-4cee-a80f-f49f078bef37" containerID="127179db16e67d9e8dcadf6734e266e67993b9f846ab820cb629d1308633756f" exitCode=0 Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.182125 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerDied","Data":"127179db16e67d9e8dcadf6734e266e67993b9f846ab820cb629d1308633756f"} Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.182164 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerStarted","Data":"d040bb42b76433ad539601aaec69ac52d503fa1b69b306ae00d824d1707f5b1a"} Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.185238 4808 generic.go:334] "Generic (PLEG): container finished" podID="5011758e-a6e4-4491-8ac6-c0a8bcb50568" containerID="c596161aeadceeb328bd43505150bab123a2f2a537b42718bb7e2a8b06f27acf" exitCode=0 Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.186320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-bbhct" event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerDied","Data":"c596161aeadceeb328bd43505150bab123a2f2a537b42718bb7e2a8b06f27acf"} Feb 17 16:00:43 crc kubenswrapper[4808]: I0217 16:00:43.186394 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerStarted","Data":"921fc7dd33aec55c58cf0c2b55ec6836878f7c0080bc6d184a05e8f04e953284"} Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.192519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerStarted","Data":"2daf81ecd3c16485533bbe62503f83d4e79a667aade15b55d10480d78481ba20"} Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.322990 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.324552 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.326806 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.341162 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.501172 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.501240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.501276 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.513255 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snf82"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.514826 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.521143 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.526201 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snf82"] Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603478 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603520 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.603929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " 
pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.604276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.627432 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"certified-operators-jqtsg\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.645553 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.705121 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc7j5\" (UniqueName: \"kubernetes.io/projected/9b925660-1865-4603-8f8e-f21a1c342f63-kube-api-access-vc7j5\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.705672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-catalog-content\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.705726 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-utilities\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc7j5\" (UniqueName: \"kubernetes.io/projected/9b925660-1865-4603-8f8e-f21a1c342f63-kube-api-access-vc7j5\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806255 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-catalog-content\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806295 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-utilities\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-utilities\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.806888 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b925660-1865-4603-8f8e-f21a1c342f63-catalog-content\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.828952 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc7j5\" (UniqueName: \"kubernetes.io/projected/9b925660-1865-4603-8f8e-f21a1c342f63-kube-api-access-vc7j5\") pod \"community-operators-snf82\" (UID: \"9b925660-1865-4603-8f8e-f21a1c342f63\") " pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:44 crc kubenswrapper[4808]: I0217 16:00:44.835665 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snf82" Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.027461 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:00:45 crc kubenswrapper[4808]: W0217 16:00:45.034441 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cdb188e_770b_4b77_8396_a2422be880a4.slice/crio-ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4 WatchSource:0}: Error finding container ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4: Status 404 returned error can't find the container with id ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4 Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.090179 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snf82"] Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.202962 4808 generic.go:334] "Generic (PLEG): container finished" podID="5011758e-a6e4-4491-8ac6-c0a8bcb50568" containerID="0f4854f446efe5957d7c81e19b5da8c7c806c0afafb344fde0ce3aaf5d49f886" 
exitCode=0 Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.203049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerDied","Data":"0f4854f446efe5957d7c81e19b5da8c7c806c0afafb344fde0ce3aaf5d49f886"} Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.208742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerStarted","Data":"63b36f7d6da84b9b4455c506dbd13856e075e7b3b6c650a39ebcaf9267f7ceaf"} Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.213241 4808 generic.go:334] "Generic (PLEG): container finished" podID="bcdfcb0d-7a0d-4cee-a80f-f49f078bef37" containerID="2daf81ecd3c16485533bbe62503f83d4e79a667aade15b55d10480d78481ba20" exitCode=0 Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.213304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerDied","Data":"2daf81ecd3c16485533bbe62503f83d4e79a667aade15b55d10480d78481ba20"} Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.215905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b"} Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.215953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4"} Feb 17 16:00:45 crc kubenswrapper[4808]: I0217 16:00:45.896725 4808 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry" containerID="cri-o://2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da" gracePeriod=30 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.228693 4808 generic.go:334] "Generic (PLEG): container finished" podID="9b925660-1865-4603-8f8e-f21a1c342f63" containerID="d0350e5a6a6ac994336a37c313b488f12ab8fc28005e7c91cfab28eb02b3774d" exitCode=0 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.228819 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerDied","Data":"d0350e5a6a6ac994336a37c313b488f12ab8fc28005e7c91cfab28eb02b3774d"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.236229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lstjz" event={"ID":"bcdfcb0d-7a0d-4cee-a80f-f49f078bef37","Type":"ContainerStarted","Data":"32d9978b151ae50bdecbc21ec640df93bbd6346bdfdfcc6a9ac2cc3e03f96622"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.238854 4808 generic.go:334] "Generic (PLEG): container finished" podID="7cdb188e-770b-4b77-8396-a2422be880a4" containerID="47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b" exitCode=0 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.240344 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.258143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bbhct" 
event={"ID":"5011758e-a6e4-4491-8ac6-c0a8bcb50568","Type":"ContainerStarted","Data":"fdf09729f009f935cf68d8269108df5e5ec401e39d9ce2ba72a6e317f7d6227f"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.260954 4808 generic.go:334] "Generic (PLEG): container finished" podID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerID="2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da" exitCode=0 Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.260984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerDied","Data":"2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da"} Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.298302 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lstjz" podStartSLOduration=1.779485342 podStartE2EDuration="4.298277495s" podCreationTimestamp="2026-02-17 16:00:42 +0000 UTC" firstStartedPulling="2026-02-17 16:00:43.183494145 +0000 UTC m=+406.699853218" lastFinishedPulling="2026-02-17 16:00:45.702286298 +0000 UTC m=+409.218645371" observedRunningTime="2026-02-17 16:00:46.276484382 +0000 UTC m=+409.792843455" watchObservedRunningTime="2026-02-17 16:00:46.298277495 +0000 UTC m=+409.814636568" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.325550 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bbhct" podStartSLOduration=2.79889054 podStartE2EDuration="5.325530612s" podCreationTimestamp="2026-02-17 16:00:41 +0000 UTC" firstStartedPulling="2026-02-17 16:00:43.187532287 +0000 UTC m=+406.703891360" lastFinishedPulling="2026-02-17 16:00:45.714172359 +0000 UTC m=+409.230531432" observedRunningTime="2026-02-17 16:00:46.324310416 +0000 UTC m=+409.840669489" watchObservedRunningTime="2026-02-17 16:00:46.325530612 +0000 UTC m=+409.841889695" Feb 
17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.340256 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.431921 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.431987 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432084 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432176 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: 
\"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432344 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.432418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") pod \"ddc3801d-3513-460c-a719-ed9dc92697e7\" (UID: \"ddc3801d-3513-460c-a719-ed9dc92697e7\") " Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.433533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.433627 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.439331 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.445939 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd" (OuterVolumeSpecName: "kube-api-access-l78nd") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "kube-api-access-l78nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.446479 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.446715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.451301 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.455677 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ddc3801d-3513-460c-a719-ed9dc92697e7" (UID: "ddc3801d-3513-460c-a719-ed9dc92697e7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533818 4808 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533860 4808 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533872 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l78nd\" (UniqueName: \"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-kube-api-access-l78nd\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533881 4808 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/ddc3801d-3513-460c-a719-ed9dc92697e7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533891 4808 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ddc3801d-3513-460c-a719-ed9dc92697e7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533901 4808 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ddc3801d-3513-460c-a719-ed9dc92697e7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:46 crc kubenswrapper[4808]: I0217 16:00:46.533911 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ddc3801d-3513-460c-a719-ed9dc92697e7-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.270941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" event={"ID":"ddc3801d-3513-460c-a719-ed9dc92697e7","Type":"ContainerDied","Data":"6e3f1081b00b18d9f343d94a49f4eb8fd3475f6dc82e8e6676483c99ff105dda"} Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.270962 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fmfh5" Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.271487 4808 scope.go:117] "RemoveContainer" containerID="2c6abeefd28c47d49cee179f808d4b10aff7311be498ba875ef344c21dc775da" Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.281302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee"} Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.337865 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 16:00:47 crc kubenswrapper[4808]: I0217 16:00:47.341765 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fmfh5"] Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.287205 4808 generic.go:334] "Generic (PLEG): container finished" podID="7cdb188e-770b-4b77-8396-a2422be880a4" containerID="90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee" exitCode=0 Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.287258 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee"} Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.301474 4808 generic.go:334] "Generic (PLEG): container finished" podID="9b925660-1865-4603-8f8e-f21a1c342f63" containerID="52e264425fb80accc6368ccf3807bac64ef6f8e36953f6e0db1eddd3a570a652" exitCode=0 Feb 17 16:00:48 crc kubenswrapper[4808]: I0217 16:00:48.301688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" 
event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerDied","Data":"52e264425fb80accc6368ccf3807bac64ef6f8e36953f6e0db1eddd3a570a652"} Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.157047 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" path="/var/lib/kubelet/pods/ddc3801d-3513-460c-a719-ed9dc92697e7/volumes" Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.308518 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerStarted","Data":"2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1"} Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.310203 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snf82" event={"ID":"9b925660-1865-4603-8f8e-f21a1c342f63","Type":"ContainerStarted","Data":"55b66084d7c88b24753d4f326e3d7444972e56a90179b952814cb3b23af1b396"} Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.339262 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqtsg" podStartSLOduration=2.880396288 podStartE2EDuration="5.33924663s" podCreationTimestamp="2026-02-17 16:00:44 +0000 UTC" firstStartedPulling="2026-02-17 16:00:46.242196229 +0000 UTC m=+409.758555302" lastFinishedPulling="2026-02-17 16:00:48.701046561 +0000 UTC m=+412.217405644" observedRunningTime="2026-02-17 16:00:49.334794435 +0000 UTC m=+412.851153518" watchObservedRunningTime="2026-02-17 16:00:49.33924663 +0000 UTC m=+412.855605703" Feb 17 16:00:49 crc kubenswrapper[4808]: I0217 16:00:49.351773 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snf82" podStartSLOduration=2.776472529 podStartE2EDuration="5.35175468s" podCreationTimestamp="2026-02-17 16:00:44 +0000 UTC" 
firstStartedPulling="2026-02-17 16:00:46.232051011 +0000 UTC m=+409.748410084" lastFinishedPulling="2026-02-17 16:00:48.807333162 +0000 UTC m=+412.323692235" observedRunningTime="2026-02-17 16:00:49.350176403 +0000 UTC m=+412.866535496" watchObservedRunningTime="2026-02-17 16:00:49.35175468 +0000 UTC m=+412.868113763"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.267130 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.267740 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.326531 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.404123 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bbhct"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.453554 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.454762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:52 crc kubenswrapper[4808]: I0217 16:00:52.503689 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:53 crc kubenswrapper[4808]: I0217 16:00:53.395441 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lstjz"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.646467 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.647157 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.683366 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.836615 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.836680 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:54 crc kubenswrapper[4808]: I0217 16:00:54.876093 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:00:55 crc kubenswrapper[4808]: I0217 16:00:55.381208 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqtsg"
Feb 17 16:00:55 crc kubenswrapper[4808]: I0217 16:00:55.384083 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snf82"
Feb 17 16:02:21 crc kubenswrapper[4808]: I0217 16:02:21.592835 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:02:21 crc kubenswrapper[4808]: I0217 16:02:21.593633 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:02:51 crc kubenswrapper[4808]: I0217 16:02:51.591915 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:02:51 crc kubenswrapper[4808]: I0217 16:02:51.592706 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.592278 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.592972 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.593034 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.593877 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:03:21 crc kubenswrapper[4808]: I0217 16:03:21.594007 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db" gracePeriod=600
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.403385 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db" exitCode=0
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.403464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db"}
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.404473 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c"}
Feb 17 16:03:22 crc kubenswrapper[4808]: I0217 16:03:22.404518 4808 scope.go:117] "RemoveContainer" containerID="77d27579afc79c7f9499a81b219b4983465c9c8999e7fd27d50b7990ea6072c1"
Feb 17 16:05:21 crc kubenswrapper[4808]: I0217 16:05:21.593325 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:05:21 crc kubenswrapper[4808]: I0217 16:05:21.594104 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.389141 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"]
Feb 17 16:05:39 crc kubenswrapper[4808]: E0217 16:05:39.390171 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.390191 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.390338 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc3801d-3513-460c-a719-ed9dc92697e7" containerName="registry"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.391349 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.393869 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.406270 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"]
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.491273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.491348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.491410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593282 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593419 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593701 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.593841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.594032 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.626216 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.719019 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:39 crc kubenswrapper[4808]: I0217 16:05:39.975071 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"]
Feb 17 16:05:40 crc kubenswrapper[4808]: I0217 16:05:40.424337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerStarted","Data":"c1927813e5dee42974ad95f87121936cfcb59e339c6af53fbdcd594c1a9d8a41"}
Feb 17 16:05:40 crc kubenswrapper[4808]: I0217 16:05:40.424421 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerStarted","Data":"1cf44481943a899439fc15a8de81c91b62c9ca1868a444f67bef4eb79a7c7f80"}
Feb 17 16:05:41 crc kubenswrapper[4808]: I0217 16:05:41.432154 4808 generic.go:334] "Generic (PLEG): container finished" podID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerID="c1927813e5dee42974ad95f87121936cfcb59e339c6af53fbdcd594c1a9d8a41" exitCode=0
Feb 17 16:05:41 crc kubenswrapper[4808]: I0217 16:05:41.432594 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"c1927813e5dee42974ad95f87121936cfcb59e339c6af53fbdcd594c1a9d8a41"}
Feb 17 16:05:41 crc kubenswrapper[4808]: I0217 16:05:41.438162 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 16:05:43 crc kubenswrapper[4808]: I0217 16:05:43.450095 4808 generic.go:334] "Generic (PLEG): container finished" podID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerID="495964b7fe8320dfa69f3d266112f71b2d4ec51d673ac680479f0aac4c456279" exitCode=0
Feb 17 16:05:43 crc kubenswrapper[4808]: I0217 16:05:43.450212 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"495964b7fe8320dfa69f3d266112f71b2d4ec51d673ac680479f0aac4c456279"}
Feb 17 16:05:44 crc kubenswrapper[4808]: I0217 16:05:44.467175 4808 generic.go:334] "Generic (PLEG): container finished" podID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerID="b98b03db716e9694fdfd21b758179be84383bdb2aafaecd25d545be5dc8eaedd" exitCode=0
Feb 17 16:05:44 crc kubenswrapper[4808]: I0217 16:05:44.467233 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"b98b03db716e9694fdfd21b758179be84383bdb2aafaecd25d545be5dc8eaedd"}
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.743391 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.894785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") pod \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") "
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.894866 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") pod \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") "
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.894934 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") pod \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\" (UID: \"11d9feea-2c1d-48e4-9cf4-bde172f9faea\") "
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.899910 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle" (OuterVolumeSpecName: "bundle") pod "11d9feea-2c1d-48e4-9cf4-bde172f9faea" (UID: "11d9feea-2c1d-48e4-9cf4-bde172f9faea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.903955 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz" (OuterVolumeSpecName: "kube-api-access-x4vtz") pod "11d9feea-2c1d-48e4-9cf4-bde172f9faea" (UID: "11d9feea-2c1d-48e4-9cf4-bde172f9faea"). InnerVolumeSpecName "kube-api-access-x4vtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.915961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util" (OuterVolumeSpecName: "util") pod "11d9feea-2c1d-48e4-9cf4-bde172f9faea" (UID: "11d9feea-2c1d-48e4-9cf4-bde172f9faea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.997050 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4vtz\" (UniqueName: \"kubernetes.io/projected/11d9feea-2c1d-48e4-9cf4-bde172f9faea-kube-api-access-x4vtz\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.997093 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:45 crc kubenswrapper[4808]: I0217 16:05:45.997113 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11d9feea-2c1d-48e4-9cf4-bde172f9faea-util\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:46 crc kubenswrapper[4808]: I0217 16:05:46.482749 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm" event={"ID":"11d9feea-2c1d-48e4-9cf4-bde172f9faea","Type":"ContainerDied","Data":"1cf44481943a899439fc15a8de81c91b62c9ca1868a444f67bef4eb79a7c7f80"}
Feb 17 16:05:46 crc kubenswrapper[4808]: I0217 16:05:46.482809 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf44481943a899439fc15a8de81c91b62c9ca1868a444f67bef4eb79a7c7f80"
Feb 17 16:05:46 crc kubenswrapper[4808]: I0217 16:05:46.482867 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm"
Feb 17 16:05:51 crc kubenswrapper[4808]: I0217 16:05:51.591987 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:05:51 crc kubenswrapper[4808]: I0217 16:05:51.592598 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.666772 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"]
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667116 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller" containerID="cri-o://26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667139 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd" containerID="cri-o://28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667213 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node" containerID="cri-o://80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667221 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging" containerID="cri-o://5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667203 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb" containerID="cri-o://363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667228 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb" containerID="cri-o://58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.667283 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" gracePeriod=30
Feb 17 16:05:52 crc kubenswrapper[4808]: I0217 16:05:52.735745 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller" containerID="cri-o://1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" gracePeriod=30
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.385122 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.389228 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-acl-logging/0.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.389840 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-controller/0.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.390373 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516229 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2q7qz"]
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516444 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516459 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516470 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="extract"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516476 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="extract"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516485 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516491 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516499 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516504 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516511 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="pull"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516517 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="pull"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516525 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516532 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="util"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516546 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="util"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516555 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516561 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516586 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516592 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516599 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516605 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516613 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516619 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516625 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516630 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516638 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kubecfg-setup"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516644 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kubecfg-setup"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516652 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516657 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516747 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516757 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="sbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516764 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-node"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516771 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="kube-rbac-proxy-ovn-metrics"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516778 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516785 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovn-acl-logging"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516791 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d9feea-2c1d-48e4-9cf4-bde172f9faea" containerName="extract"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516799 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="northd"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516806 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516813 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="nbdb"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516820 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516826 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.516911 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.516919 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.517001 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.517096 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.517102 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerName="ovnkube-controller"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.518498 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.538907 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/2.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/1.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539639 4808 generic.go:334] "Generic (PLEG): container finished" podID="18916d6d-e063-40a0-816f-554f95cd2956" containerID="a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c" exitCode=2
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539699 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerDied","Data":"a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.539965 4808 scope.go:117] "RemoveContainer" containerID="7bdc6e86716d40b6c433ccb24a97665384190bfe2ab5ddf0868109d78826917e"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.540417 4808 scope.go:117] "RemoveContainer" containerID="a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.540659 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-msgfd_openshift-multus(18916d6d-e063-40a0-816f-554f95cd2956)\"" pod="openshift-multus/multus-msgfd" podUID="18916d6d-e063-40a0-816f-554f95cd2956"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.542215 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovnkube-controller/3.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.544149 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-acl-logging/0.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.544543 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tgvlh_5748f02a-e3dd-47c7-b89d-b472c718e593/ovn-controller/0.log"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.544928 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545007 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545070 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545121 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545178 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545230 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" exitCode=0
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545283 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" exitCode=143
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545348 4808 generic.go:334] "Generic (PLEG): container finished" podID="5748f02a-e3dd-47c7-b89d-b472c718e593" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" exitCode=143
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545048 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545568 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"}
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545592 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh"
event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545602 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545610 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545620 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545631 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545637 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545643 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545648 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545654 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545659 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545664 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545669 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545675 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545690 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545696 4808 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545702 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545707 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545712 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545718 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545723 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545729 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545734 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545740 4808 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545755 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545760 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545766 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545772 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545778 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545783 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} Feb 17 
16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545789 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545795 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545800 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545805 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tgvlh" event={"ID":"5748f02a-e3dd-47c7-b89d-b472c718e593","Type":"ContainerDied","Data":"ad60f37f93ae8b251f62c5805faa94eb63cd424e9052d1f8a1dad95e11326ec9"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545820 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545826 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545831 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545836 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545842 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545847 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545852 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545858 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545864 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.545870 4808 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.560070 4808 scope.go:117] "RemoveContainer" 
containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.579031 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586200 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586258 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586291 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586350 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586398 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586771 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586844 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.586856 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587071 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587080 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587137 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587150 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket" (OuterVolumeSpecName: "log-socket") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587179 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587202 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587234 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc 
kubenswrapper[4808]: I0217 16:05:53.587190 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587207 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587295 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587230 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587300 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587373 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587390 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " 
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587410 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587425 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587420 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587457 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash" (OuterVolumeSpecName: "host-slash") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587440 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") pod \"5748f02a-e3dd-47c7-b89d-b472c718e593\" (UID: \"5748f02a-e3dd-47c7-b89d-b472c718e593\") " Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587478 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log" (OuterVolumeSpecName: "node-log") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587504 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587600 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-slash\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587639 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-etc-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587726 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-systemd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587761 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-systemd-units\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-netd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587794 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-log-socket\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-bin\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587842 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587859 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-config\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-kubelet\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587931 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-ovn\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587947 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-env-overrides\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587974 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-node-log\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588000 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-script-lib\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60c87e4f-f758-4e3e-a812-1636091ba578-ovn-node-metrics-cert\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-netns\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588064 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-var-lib-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588086 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588103 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8xth\" (UniqueName: \"kubernetes.io/projected/60c87e4f-f758-4e3e-a812-1636091ba578-kube-api-access-l8xth\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588143 4808 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588153 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588163 4808 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588173 4808 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588182 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588192 4808 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588255 4808 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588278 4808 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-node-log\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588291 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588307 4808 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-slash\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588319 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588336 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5748f02a-e3dd-47c7-b89d-b472c718e593-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588349 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588361 4808 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588374 4808 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-log-socket\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.588387 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.587657 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.598160 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.598605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8" (OuterVolumeSpecName: "kube-api-access-qnzj8") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "kube-api-access-qnzj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.610823 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.633124 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5748f02a-e3dd-47c7-b89d-b472c718e593" (UID: "5748f02a-e3dd-47c7-b89d-b472c718e593"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.637057 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.664938 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690153 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-systemd-units\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-log-socket\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690223 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-netd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-bin\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690268 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690304 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-config\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-kubelet\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690347 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-ovn\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-env-overrides\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-node-log\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690404 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60c87e4f-f758-4e3e-a812-1636091ba578-ovn-node-metrics-cert\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690419 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-script-lib\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-netns\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690461 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-var-lib-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690500 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8xth\" (UniqueName: \"kubernetes.io/projected/60c87e4f-f758-4e3e-a812-1636091ba578-kube-api-access-l8xth\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690516 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-slash\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690532 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-etc-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690550 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-systemd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690597 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnzj8\" (UniqueName: \"kubernetes.io/projected/5748f02a-e3dd-47c7-b89d-b472c718e593-kube-api-access-qnzj8\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690609 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5748f02a-e3dd-47c7-b89d-b472c718e593-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690618 4808 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690626 4808 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5748f02a-e3dd-47c7-b89d-b472c718e593-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690678 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-systemd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690714 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-systemd-units\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-log-socket\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690756 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-netd\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690775 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-cni-bin\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690796 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.690816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-config\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-kubelet\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691475 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-run-ovn\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-env-overrides\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.691813 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-node-log\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692098 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-ovn-kubernetes\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-slash\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692728 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-etc-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692829 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-var-lib-openvswitch\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692809 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/60c87e4f-f758-4e3e-a812-1636091ba578-host-run-netns\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.692783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/60c87e4f-f758-4e3e-a812-1636091ba578-ovnkube-script-lib\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.699041 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/60c87e4f-f758-4e3e-a812-1636091ba578-ovn-node-metrics-cert\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.703031 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.727131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8xth\" (UniqueName: \"kubernetes.io/projected/60c87e4f-f758-4e3e-a812-1636091ba578-kube-api-access-l8xth\") pod \"ovnkube-node-2q7qz\" (UID: \"60c87e4f-f758-4e3e-a812-1636091ba578\") " pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.737829 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.755038 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.784464 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.803774 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.825103 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.825821 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.825866 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.825893 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.826279 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826331 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826366 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.826718 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826742 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.826762 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.827063 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827104 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827131 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.827617 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827642 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.827657 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828050 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828083 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828102 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"
Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828354 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"
Feb 17
16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828384 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828404 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828662 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828688 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828704 4808 scope.go:117] "RemoveContainer" 
containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.828959 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.828985 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.829004 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: E0217 16:05:53.829237 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.829274 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.829294 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830007 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830029 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830234 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not 
exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830257 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830467 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830492 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.830727 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831011 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831035 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831266 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831302 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831901 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not 
find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.831993 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.833996 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834098 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834488 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834513 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834823 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.834850 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835087 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835107 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835341 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 
1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835361 4808 scope.go:117] "RemoveContainer" containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835562 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835598 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835965 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.835985 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836213 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836232 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836464 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836485 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836822 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not 
exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.836843 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837078 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837099 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837327 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837344 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837613 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status 
\"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837633 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837858 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.837939 4808 scope.go:117] "RemoveContainer" containerID="1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838244 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05"} err="failed to get container status \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": rpc error: code = NotFound desc = could not find container \"1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05\": container with ID starting with 1385665b452c9c54279b496b70105068cc9ac986718df98cc735fc09bcd4ac05 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838325 4808 scope.go:117] "RemoveContainer" 
containerID="a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838649 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f"} err="failed to get container status \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": rpc error: code = NotFound desc = could not find container \"a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f\": container with ID starting with a3c59386483fde848e69cdd193832875e9c1cbe4725d43032090c9a62494c40f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.838740 4808 scope.go:117] "RemoveContainer" containerID="363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839059 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf"} err="failed to get container status \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": rpc error: code = NotFound desc = could not find container \"363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf\": container with ID starting with 363a0f82d4347e522c91f27597bc03aa33f75e0399760fcc5cfdc1772eb6aabf not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839080 4808 scope.go:117] "RemoveContainer" containerID="58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839313 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814"} err="failed to get container status \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": rpc error: code = NotFound desc = could 
not find container \"58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814\": container with ID starting with 58ee49f9d112bd2fe6a3cc5f499d1be9d4c51f2741ffb9bf24754a46a0a12814 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839334 4808 scope.go:117] "RemoveContainer" containerID="28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839561 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13"} err="failed to get container status \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": rpc error: code = NotFound desc = could not find container \"28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13\": container with ID starting with 28b04c73bfd5eadf6c1e436f6a7150074ee8357cef79b0e040c1d9f3809aab13 not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839594 4808 scope.go:117] "RemoveContainer" containerID="4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839836 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f"} err="failed to get container status \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": rpc error: code = NotFound desc = could not find container \"4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f\": container with ID starting with 4c263e6c0445a0badadcbc5b50c370fd4ee9a4d0cb3e535e3d7944e938cbea4f not found: ID does not exist" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.839855 4808 scope.go:117] "RemoveContainer" containerID="80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09" Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 
16:05:53.840090 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09"} err="failed to get container status \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": rpc error: code = NotFound desc = could not find container \"80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09\": container with ID starting with 80ab3de82f2a3f22425c34c9b4abcbc925a7076e3f2ce3b952f10aeb856e1c09 not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.840109 4808 scope.go:117] "RemoveContainer" containerID="5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.841446 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2"} err="failed to get container status \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": rpc error: code = NotFound desc = could not find container \"5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2\": container with ID starting with 5e9e729fa5a68d07a0f7e4a86114ed39e4128428e5a21c2f3f113f869adc9fc2 not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.841530 4808 scope.go:117] "RemoveContainer" containerID="26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.858809 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a"} err="failed to get container status \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": rpc error: code = NotFound desc = could not find container \"26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a\": container with ID starting with 26a9d62d12c66018649ffcb84c69e20f1c08f3241bdb02ba4306b08dbe5ec49a not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.858847 4808 scope.go:117] "RemoveContainer" containerID="35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.859927 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437"} err="failed to get container status \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": rpc error: code = NotFound desc = could not find container \"35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437\": container with ID starting with 35ad82d8d6c808887e0f7bb17eaccaab2d2ecddd88ac265b2746a566c937a437 not found: ID does not exist"
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.909997 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"]
Feb 17 16:05:53 crc kubenswrapper[4808]: I0217 16:05:53.914202 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tgvlh"]
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.553863 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/2.log"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.555786 4808 generic.go:334] "Generic (PLEG): container finished" podID="60c87e4f-f758-4e3e-a812-1636091ba578" containerID="891243d5714197c2aa551a24c76441926698db9cb51175d7b6f86c558f055955" exitCode=0
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.555823 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerDied","Data":"891243d5714197c2aa551a24c76441926698db9cb51175d7b6f86c558f055955"}
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.555871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"ae0d57d73f5fc05ce5ec2e4de27484ba682d37ebfa253a15a86795aafd48e9a2"}
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.580468 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"]
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.582165 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.588386 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-h7dtr"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.588914 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.588956 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.600864 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxjl\" (UniqueName: \"kubernetes.io/projected/038219cb-02e4-4451-b0d4-3e6af1518769-kube-api-access-bxxjl\") pod \"obo-prometheus-operator-68bc856cb9-lshnf\" (UID: \"038219cb-02e4-4451-b0d4-3e6af1518769\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.705905 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"]
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.706477 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.709310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxxjl\" (UniqueName: \"kubernetes.io/projected/038219cb-02e4-4451-b0d4-3e6af1518769-kube-api-access-bxxjl\") pod \"obo-prometheus-operator-68bc856cb9-lshnf\" (UID: \"038219cb-02e4-4451-b0d4-3e6af1518769\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.711433 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nbl5d"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.711743 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.716932 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"]
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.717627 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.757165 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxxjl\" (UniqueName: \"kubernetes.io/projected/038219cb-02e4-4451-b0d4-3e6af1518769-kube-api-access-bxxjl\") pod \"obo-prometheus-operator-68bc856cb9-lshnf\" (UID: \"038219cb-02e4-4451-b0d4-3e6af1518769\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810420 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810519 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.810558 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.837625 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7nl9q"]
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.838316 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.843212 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.843401 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7x9g9"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2fm\" (UniqueName: \"kubernetes.io/projected/c7703980-a631-414f-b3fc-a76dfdd1e085-kube-api-access-bg2fm\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912298 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7703980-a631-414f-b3fc-a76dfdd1e085-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.912474 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.916416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.923664 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.929878 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.931136 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d2656af-cd69-49ff-8d35-7c81fa4c4693-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24\" (UID: \"6d2656af-cd69-49ff-8d35-7c81fa4c4693\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:54 crc kubenswrapper[4808]: I0217 16:05:54.935403 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2b8a3138-8c3d-434b-9069-8cafc18a0111-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5\" (UID: \"2b8a3138-8c3d-434b-9069-8cafc18a0111\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952741 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952805 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952823 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:05:54 crc kubenswrapper[4808]: E0217 16:05:54.952855 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(0c589b65d82eb0fdbf770e480e66cfff62221df77fcefc9630953297fe88a9eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" podUID="038219cb-02e4-4451-b0d4-3e6af1518769"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.013565 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2fm\" (UniqueName: \"kubernetes.io/projected/c7703980-a631-414f-b3fc-a76dfdd1e085-kube-api-access-bg2fm\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.013664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7703980-a631-414f-b3fc-a76dfdd1e085-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.016090 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pkvl8"]
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.016769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.018295 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7703980-a631-414f-b3fc-a76dfdd1e085-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.020863 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-dqww6"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.037604 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.038441 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2fm\" (UniqueName: \"kubernetes.io/projected/c7703980-a631-414f-b3fc-a76dfdd1e085-kube-api-access-bg2fm\") pod \"observability-operator-59bdc8b94-7nl9q\" (UID: \"c7703980-a631-414f-b3fc-a76dfdd1e085\") " pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056143 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056211 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056230 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.056291 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(818e5155910ea6ad59c90fc200700170a94afcec59a1a3b3f6aa82388d27c2d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" podUID="6d2656af-cd69-49ff-8d35-7c81fa4c4693"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.067008 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090669 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090735 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090768 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.090838 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(97dda1c5c719f178cc3de54b2cfb0238a02f7d3dc8fecc0446b043bec34ce70b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" podUID="2b8a3138-8c3d-434b-9069-8cafc18a0111"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.114902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvcn\" (UniqueName: \"kubernetes.io/projected/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-kube-api-access-dvvcn\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.114963 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.152517 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5748f02a-e3dd-47c7-b89d-b472c718e593" path="/var/lib/kubelet/pods/5748f02a-e3dd-47c7-b89d-b472c718e593/volumes"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.162941 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188781 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188852 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188873 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.188923 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(460fde54dfa67f209f8ece87bc25964aa98e86670dd9501db630003a221a1434): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" podUID="c7703980-a631-414f-b3fc-a76dfdd1e085"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.216487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvcn\" (UniqueName: \"kubernetes.io/projected/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-kube-api-access-dvvcn\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.216600 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.217491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.240920 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvcn\" (UniqueName: \"kubernetes.io/projected/b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab-kube-api-access-dvvcn\") pod \"perses-operator-5bf474d74f-pkvl8\" (UID: \"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab\") " pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.339947 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358191 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358257 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358282 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:05:55 crc kubenswrapper[4808]: E0217 16:05:55.358343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(15da61cdc63c72e2fdad213823c8f2e78caac16ff12f4f0a8c6229e53c49e518): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" podUID="b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab"
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"ee57b94cab0b03328a446cdf0ae564fea660e269b2587ae2cc143ac045e98980"}
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564191 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"f0afa1fc9ee7af0b73896d96c3b6c8e7d59ce02d7e7b4baa4b2462925eb7159a"}
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564202 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"139a35b7f1e25b6300d41c7bbeb759d48a42a0f5b0ead08cb8437ca8ff60d5f2"}
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"14e480693f2117575fae84765eb1818fcff9d17e172dcdc8602f08558cc059b0"}
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"7e19c5b68e5100b134fd90854f3c6959f62854a72d5c94541b09aed5b4f8f89b"}
Feb 17 16:05:55 crc kubenswrapper[4808]: I0217 16:05:55.564228 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"7766663331b10bfbe045973076d5aa51a9dff0225e6a2f9d0fb225d78ff287be"}
Feb 17 16:05:57 crc kubenswrapper[4808]: I0217 16:05:57.581381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"b397ee82843a1a5ec091822d16025b85f95efbbf1af5d1d8088446cb3f45843c"}
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" event={"ID":"60c87e4f-f758-4e3e-a812-1636091ba578","Type":"ContainerStarted","Data":"153a45d841ae98960df594c65a735856b8792637444cdab267529897e8dbff9b"}
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600845 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600867 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.600881 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.636754 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"]
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.636925 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.637024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.637528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.638613 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.643879 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"]
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.644050 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.644613 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.646443 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" podStartSLOduration=7.646420303 podStartE2EDuration="7.646420303s" podCreationTimestamp="2026-02-17 16:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:06:00.628726112 +0000 UTC m=+724.145085195" watchObservedRunningTime="2026-02-17 16:06:00.646420303 +0000 UTC m=+724.162779386"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.671830 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"]
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.671987 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.672769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.674505 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pkvl8"]
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.674671 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.675214 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.677962 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.678018 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.678045 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.678089 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators(038219cb-02e4-4451-b0d4-3e6af1518769)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-lshnf_openshift-operators_038219cb-02e4-4451-b0d4-3e6af1518769_0(b456343ccfd9f1afe3374da29f1ba3760643f04d6051e650045a0ac2385ab0f6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" podUID="038219cb-02e4-4451-b0d4-3e6af1518769" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684482 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684552 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684597 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.684645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators(2b8a3138-8c3d-434b-9069-8cafc18a0111)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_openshift-operators_2b8a3138-8c3d-434b-9069-8cafc18a0111_0(61728f6d021b62f247506d978ec227fb1c5943b28a9867e6d17a32f2292655e4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" podUID="2b8a3138-8c3d-434b-9069-8cafc18a0111" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.731312 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7nl9q"] Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.731474 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: I0217 16:06:00.732138 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769724 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769818 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769846 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.769897 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-pkvl8_openshift-operators(b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-pkvl8_openshift-operators_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab_0(dbfc53532c3456391e7fb5aaa2296fb573ecea3510258035c5e589290d07c4ea): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" podUID="b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774503 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774579 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774599 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.774660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators(6d2656af-cd69-49ff-8d35-7c81fa4c4693)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_openshift-operators_6d2656af-cd69-49ff-8d35-7c81fa4c4693_0(4737dacb8b8e1ebc8fba4282a225103fbe1300fcfc6d068cc82f9b92c4d47382): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" podUID="6d2656af-cd69-49ff-8d35-7c81fa4c4693" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.793829 4808 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.793930 4808 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.793957 4808 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:00 crc kubenswrapper[4808]: E0217 16:06:00.794009 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7nl9q_openshift-operators(c7703980-a631-414f-b3fc-a76dfdd1e085)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7nl9q_openshift-operators_c7703980-a631-414f-b3fc-a76dfdd1e085_0(9dae60cdbedd2f47631d569523ad840a1971860516b898c235029c8f90f8cc4c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" podUID="c7703980-a631-414f-b3fc-a76dfdd1e085" Feb 17 16:06:06 crc kubenswrapper[4808]: I0217 16:06:06.146019 4808 scope.go:117] "RemoveContainer" containerID="a6961e0c67ed7d26f44519f3b555fda05bf5219f4205ed2528b68394bcb91f2c" Feb 17 16:06:06 crc kubenswrapper[4808]: I0217 16:06:06.644872 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-msgfd_18916d6d-e063-40a0-816f-554f95cd2956/kube-multus/2.log" Feb 17 16:06:06 crc kubenswrapper[4808]: I0217 16:06:06.645326 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-msgfd" event={"ID":"18916d6d-e063-40a0-816f-554f95cd2956","Type":"ContainerStarted","Data":"b2be79d131dfd425911d83bcd2437def405f952539da3aa726991db602fe1e17"} Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.144810 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.144929 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.145688 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.145781 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.407622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24"] Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.455964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf"] Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.674701 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" event={"ID":"6d2656af-cd69-49ff-8d35-7c81fa4c4693","Type":"ContainerStarted","Data":"315ce1493cadfe027f2be0c66995e53f8d57e66808c72c5c73f5a6d7953d7001"} Feb 17 16:06:12 crc kubenswrapper[4808]: I0217 16:06:12.675759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" event={"ID":"038219cb-02e4-4451-b0d4-3e6af1518769","Type":"ContainerStarted","Data":"6feedadaaaffce9323d260982aca6f22ce23b4483b518e9cd46fd3c2081fd6aa"} Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.145745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.146126 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.337832 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pkvl8"] Feb 17 16:06:13 crc kubenswrapper[4808]: I0217 16:06:13.692678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" event={"ID":"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab","Type":"ContainerStarted","Data":"0f1e424d6710d90da9306f1017501fb0f80ca068e4469ff6268b207067114701"} Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.144820 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.145523 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.368177 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7nl9q"] Feb 17 16:06:14 crc kubenswrapper[4808]: I0217 16:06:14.699451 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" event={"ID":"c7703980-a631-414f-b3fc-a76dfdd1e085","Type":"ContainerStarted","Data":"14af039fdf7c3008c63aa220221515c0b42dcaa912e3a1c9ad8e3e5786a07af3"} Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.145383 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.146140 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.446798 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5"] Feb 17 16:06:16 crc kubenswrapper[4808]: W0217 16:06:16.472725 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b8a3138_8c3d_434b_9069_8cafc18a0111.slice/crio-cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9 WatchSource:0}: Error finding container cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9: Status 404 returned error can't find the container with id cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9 Feb 17 16:06:16 crc kubenswrapper[4808]: I0217 16:06:16.755878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" event={"ID":"2b8a3138-8c3d-434b-9069-8cafc18a0111","Type":"ContainerStarted","Data":"cc0ae8ebf18f35dcc09cec26c79f5e7b87893fbb9f28e913d054e7f279031da9"} Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.592361 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.592421 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 
16:06:21.592469 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.593059 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.593109 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c" gracePeriod=600 Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.796282 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c" exitCode=0 Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.796326 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c"} Feb 17 16:06:21 crc kubenswrapper[4808]: I0217 16:06:21.796357 4808 scope.go:117] "RemoveContainer" containerID="088a965aa6da48d3335f0fd7b3ea4dc5ac44753ad3722fc3086c2312ec7c03db" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.815093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" 
event={"ID":"b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab","Type":"ContainerStarted","Data":"8ac777c99872f45b25a038f193252f3ffa545029acd4e9f5bd4fb467aa7397f2"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.815624 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.817177 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" event={"ID":"2b8a3138-8c3d-434b-9069-8cafc18a0111","Type":"ContainerStarted","Data":"0c737d97005027182cee956998bce1cc09e0e41efcdf257112ad80295357b063"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.819908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" event={"ID":"6d2656af-cd69-49ff-8d35-7c81fa4c4693","Type":"ContainerStarted","Data":"98e09363b1bb0f0a86eae5e4462dd49cf323aef6acfb9841f69bac483cb8fe03"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.822346 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" event={"ID":"038219cb-02e4-4451-b0d4-3e6af1518769","Type":"ContainerStarted","Data":"6176baeb8348833598843dee63a35c5629f6ddbd0a35d4dff740d9c4accddfdb"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.826092 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.830102 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" 
event={"ID":"c7703980-a631-414f-b3fc-a76dfdd1e085","Type":"ContainerStarted","Data":"3da5b1ba6353f511635696dc8f27ed1b144f737a18540f1a3d1a058357382927"} Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.830506 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.855025 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8" podStartSLOduration=20.573321866 podStartE2EDuration="29.855003579s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:13.346356187 +0000 UTC m=+736.862715260" lastFinishedPulling="2026-02-17 16:06:22.6280379 +0000 UTC m=+746.144396973" observedRunningTime="2026-02-17 16:06:23.848393599 +0000 UTC m=+747.364752702" watchObservedRunningTime="2026-02-17 16:06:23.855003579 +0000 UTC m=+747.371362692" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.867968 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2q7qz" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.876479 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.927303 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5" podStartSLOduration=23.754188817 podStartE2EDuration="29.927283573s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:16.475823251 +0000 UTC m=+739.992182324" lastFinishedPulling="2026-02-17 16:06:22.648918007 +0000 UTC m=+746.165277080" observedRunningTime="2026-02-17 16:06:23.886541986 +0000 UTC m=+747.402901099" watchObservedRunningTime="2026-02-17 
16:06:23.927283573 +0000 UTC m=+747.443642656"
Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.943489 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-lshnf" podStartSLOduration=19.796835357 podStartE2EDuration="29.943467432s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:12.481330863 +0000 UTC m=+735.997689936" lastFinishedPulling="2026-02-17 16:06:22.627962948 +0000 UTC m=+746.144322011" observedRunningTime="2026-02-17 16:06:23.939750042 +0000 UTC m=+747.456109155" watchObservedRunningTime="2026-02-17 16:06:23.943467432 +0000 UTC m=+747.459826515"
Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.968599 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24" podStartSLOduration=19.757607871 podStartE2EDuration="29.968561974s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:12.437565843 +0000 UTC m=+735.953924916" lastFinishedPulling="2026-02-17 16:06:22.648519946 +0000 UTC m=+746.164879019" observedRunningTime="2026-02-17 16:06:23.962488659 +0000 UTC m=+747.478847752" watchObservedRunningTime="2026-02-17 16:06:23.968561974 +0000 UTC m=+747.484921067"
Feb 17 16:06:23 crc kubenswrapper[4808]: I0217 16:06:23.993001 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7nl9q" podStartSLOduration=21.761054609 podStartE2EDuration="29.992980808s" podCreationTimestamp="2026-02-17 16:05:54 +0000 UTC" firstStartedPulling="2026-02-17 16:06:14.395971817 +0000 UTC m=+737.912330890" lastFinishedPulling="2026-02-17 16:06:22.627898016 +0000 UTC m=+746.144257089" observedRunningTime="2026-02-17 16:06:23.989690279 +0000 UTC m=+747.506049362" watchObservedRunningTime="2026-02-17 16:06:23.992980808 +0000 UTC m=+747.509339891"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.733268 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"]
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.734521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.739197 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.739249 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.743732 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2mptt"]
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.744537 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2mptt"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.745063 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4fddd"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.749719 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jrc9v"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.756161 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"]
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.761622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2mptt"]
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.773623 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dgw65"]
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.774353 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.779395 4808 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r4gtf"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.782101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qc2\" (UniqueName: \"kubernetes.io/projected/5bcb3c4d-b451-49ff-87b7-7b95830c0628-kube-api-access-25qc2\") pod \"cert-manager-webhook-687f57d79b-dgw65\" (UID: \"5bcb3c4d-b451-49ff-87b7-7b95830c0628\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.782180 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6ss2\" (UniqueName: \"kubernetes.io/projected/f70c72b0-4029-491f-b93e-4b4e52c5bf77-kube-api-access-r6ss2\") pod \"cert-manager-cainjector-cf98fcc89-cjbd9\" (UID: \"f70c72b0-4029-491f-b93e-4b4e52c5bf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.782256 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wh9\" (UniqueName: \"kubernetes.io/projected/e17861f0-9138-4fa1-8fa0-7bd761f1e1bd-kube-api-access-s8wh9\") pod \"cert-manager-858654f9db-2mptt\" (UID: \"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd\") " pod="cert-manager/cert-manager-858654f9db-2mptt"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.799117 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dgw65"]
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.885110 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25qc2\" (UniqueName: \"kubernetes.io/projected/5bcb3c4d-b451-49ff-87b7-7b95830c0628-kube-api-access-25qc2\") pod \"cert-manager-webhook-687f57d79b-dgw65\" (UID: \"5bcb3c4d-b451-49ff-87b7-7b95830c0628\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.885211 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6ss2\" (UniqueName: \"kubernetes.io/projected/f70c72b0-4029-491f-b93e-4b4e52c5bf77-kube-api-access-r6ss2\") pod \"cert-manager-cainjector-cf98fcc89-cjbd9\" (UID: \"f70c72b0-4029-491f-b93e-4b4e52c5bf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.885286 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wh9\" (UniqueName: \"kubernetes.io/projected/e17861f0-9138-4fa1-8fa0-7bd761f1e1bd-kube-api-access-s8wh9\") pod \"cert-manager-858654f9db-2mptt\" (UID: \"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd\") " pod="cert-manager/cert-manager-858654f9db-2mptt"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.910636 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6ss2\" (UniqueName: \"kubernetes.io/projected/f70c72b0-4029-491f-b93e-4b4e52c5bf77-kube-api-access-r6ss2\") pod \"cert-manager-cainjector-cf98fcc89-cjbd9\" (UID: \"f70c72b0-4029-491f-b93e-4b4e52c5bf77\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.911602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25qc2\" (UniqueName: \"kubernetes.io/projected/5bcb3c4d-b451-49ff-87b7-7b95830c0628-kube-api-access-25qc2\") pod \"cert-manager-webhook-687f57d79b-dgw65\" (UID: \"5bcb3c4d-b451-49ff-87b7-7b95830c0628\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:32 crc kubenswrapper[4808]: I0217 16:06:32.925897 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8wh9\" (UniqueName: \"kubernetes.io/projected/e17861f0-9138-4fa1-8fa0-7bd761f1e1bd-kube-api-access-s8wh9\") pod \"cert-manager-858654f9db-2mptt\" (UID: \"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd\") " pod="cert-manager/cert-manager-858654f9db-2mptt"
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.059425 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.068900 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2mptt"
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.090766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.349961 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2mptt"]
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.391964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9"]
Feb 17 16:06:33 crc kubenswrapper[4808]: W0217 16:06:33.392326 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf70c72b0_4029_491f_b93e_4b4e52c5bf77.slice/crio-a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979 WatchSource:0}: Error finding container a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979: Status 404 returned error can't find the container with id a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.416525 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dgw65"]
Feb 17 16:06:33 crc kubenswrapper[4808]: W0217 16:06:33.417475 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bcb3c4d_b451_49ff_87b7_7b95830c0628.slice/crio-27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a WatchSource:0}: Error finding container 27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a: Status 404 returned error can't find the container with id 27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.906665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2mptt" event={"ID":"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd","Type":"ContainerStarted","Data":"d4731825c937cb528d1f743ecd654c596e6dc8dd3d59ccc73a12daad262f2d6e"}
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.909836 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" event={"ID":"5bcb3c4d-b451-49ff-87b7-7b95830c0628","Type":"ContainerStarted","Data":"27aa8e2d871f29d9c3447647b4367cdfd0164bd440a6229a2a49c196a671fd0a"}
Feb 17 16:06:33 crc kubenswrapper[4808]: I0217 16:06:33.914662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" event={"ID":"f70c72b0-4029-491f-b93e-4b4e52c5bf77","Type":"ContainerStarted","Data":"a3912113415609af6197241c1726c114844023979ff7ce3cfd64117095345979"}
Feb 17 16:06:35 crc kubenswrapper[4808]: I0217 16:06:35.342943 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pkvl8"
Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.953303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" event={"ID":"5bcb3c4d-b451-49ff-87b7-7b95830c0628","Type":"ContainerStarted","Data":"36a1cf2ddc7cf09feea6f0227066f9fdd5073a3e1abd24f39c2bfb6af9e0f434"}
Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.953995 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.954584 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" event={"ID":"f70c72b0-4029-491f-b93e-4b4e52c5bf77","Type":"ContainerStarted","Data":"0c4db39151f8ef5adecf6fdab35766e0051d8c4a640dbfd1abdb8974fdcfa643"}
Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.955815 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2mptt" event={"ID":"e17861f0-9138-4fa1-8fa0-7bd761f1e1bd","Type":"ContainerStarted","Data":"68e5ae3c31d44d177a2b5748c59eb12216a5ecae434961cdc32253d2e28fd647"}
Feb 17 16:06:38 crc kubenswrapper[4808]: I0217 16:06:38.982644 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65" podStartSLOduration=2.224439618 podStartE2EDuration="6.982622548s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:33.419587749 +0000 UTC m=+756.935946822" lastFinishedPulling="2026-02-17 16:06:38.177770669 +0000 UTC m=+761.694129752" observedRunningTime="2026-02-17 16:06:38.980728796 +0000 UTC m=+762.497087869" watchObservedRunningTime="2026-02-17 16:06:38.982622548 +0000 UTC m=+762.498981621"
Feb 17 16:06:39 crc kubenswrapper[4808]: I0217 16:06:39.010494 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cjbd9" podStartSLOduration=2.035282049 podStartE2EDuration="7.010466065s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:33.394679162 +0000 UTC m=+756.911038235" lastFinishedPulling="2026-02-17 16:06:38.369863178 +0000 UTC m=+761.886222251" observedRunningTime="2026-02-17 16:06:39.005253063 +0000 UTC m=+762.521612136" watchObservedRunningTime="2026-02-17 16:06:39.010466065 +0000 UTC m=+762.526825138"
Feb 17 16:06:39 crc kubenswrapper[4808]: I0217 16:06:39.042440 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2mptt" podStartSLOduration=2.214478487 podStartE2EDuration="7.042416933s" podCreationTimestamp="2026-02-17 16:06:32 +0000 UTC" firstStartedPulling="2026-02-17 16:06:33.349652888 +0000 UTC m=+756.866011961" lastFinishedPulling="2026-02-17 16:06:38.177591334 +0000 UTC m=+761.693950407" observedRunningTime="2026-02-17 16:06:39.029670427 +0000 UTC m=+762.546029500" watchObservedRunningTime="2026-02-17 16:06:39.042416933 +0000 UTC m=+762.558776006"
Feb 17 16:06:39 crc kubenswrapper[4808]: I0217 16:06:39.814705 4808 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 17 16:06:43 crc kubenswrapper[4808]: I0217 16:06:43.094779 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dgw65"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.378106 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hcq8m"]
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.379629 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.388255 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"]
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.474212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.474278 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.474453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.576511 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.576606 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.576676 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.577051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.577152 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.597041 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"community-operators-hcq8m\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") " pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:46 crc kubenswrapper[4808]: I0217 16:06:46.696670 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:47 crc kubenswrapper[4808]: I0217 16:06:47.027084 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"]
Feb 17 16:06:48 crc kubenswrapper[4808]: I0217 16:06:48.016293 4808 generic.go:334] "Generic (PLEG): container finished" podID="269e3307-558f-4451-bf67-eb8e9be6237f" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587" exitCode=0
Feb 17 16:06:48 crc kubenswrapper[4808]: I0217 16:06:48.016376 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"}
Feb 17 16:06:48 crc kubenswrapper[4808]: I0217 16:06:48.016425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerStarted","Data":"1fd2bf092e18b776d10b7a03c35f6845c7dfb7c5d54cda2b4dcb7c0f8b0de573"}
Feb 17 16:06:52 crc kubenswrapper[4808]: I0217 16:06:52.042598 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerStarted","Data":"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"}
Feb 17 16:06:53 crc kubenswrapper[4808]: I0217 16:06:53.052084 4808 generic.go:334] "Generic (PLEG): container finished" podID="269e3307-558f-4451-bf67-eb8e9be6237f" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3" exitCode=0
Feb 17 16:06:53 crc kubenswrapper[4808]: I0217 16:06:53.052156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"}
Feb 17 16:06:54 crc kubenswrapper[4808]: I0217 16:06:54.066962 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerStarted","Data":"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"}
Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.697259 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.697609 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.739921 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:06:56 crc kubenswrapper[4808]: I0217 16:06:56.757413 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hcq8m" podStartSLOduration=4.953725329 podStartE2EDuration="10.757393246s" podCreationTimestamp="2026-02-17 16:06:46 +0000 UTC" firstStartedPulling="2026-02-17 16:06:48.018893583 +0000 UTC m=+771.535252686" lastFinishedPulling="2026-02-17 16:06:53.8225615 +0000 UTC m=+777.338920603" observedRunningTime="2026-02-17 16:06:54.08864401 +0000 UTC m=+777.605003123" watchObservedRunningTime="2026-02-17 16:06:56.757393246 +0000 UTC m=+780.273752339"
Feb 17 16:07:06 crc kubenswrapper[4808]: I0217 16:07:06.766489 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:07:06 crc kubenswrapper[4808]: I0217 16:07:06.840157 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"]
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.166883 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hcq8m" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server" containerID="cri-o://567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" gracePeriod=2
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.546401 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.585738 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") pod \"269e3307-558f-4451-bf67-eb8e9be6237f\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") "
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.585875 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") pod \"269e3307-558f-4451-bf67-eb8e9be6237f\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") "
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.585950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") pod \"269e3307-558f-4451-bf67-eb8e9be6237f\" (UID: \"269e3307-558f-4451-bf67-eb8e9be6237f\") "
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.587414 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities" (OuterVolumeSpecName: "utilities") pod "269e3307-558f-4451-bf67-eb8e9be6237f" (UID: "269e3307-558f-4451-bf67-eb8e9be6237f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.597200 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n" (OuterVolumeSpecName: "kube-api-access-scd7n") pod "269e3307-558f-4451-bf67-eb8e9be6237f" (UID: "269e3307-558f-4451-bf67-eb8e9be6237f"). InnerVolumeSpecName "kube-api-access-scd7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.637931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "269e3307-558f-4451-bf67-eb8e9be6237f" (UID: "269e3307-558f-4451-bf67-eb8e9be6237f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678334 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"]
Feb 17 16:07:07 crc kubenswrapper[4808]: E0217 16:07:07.678742 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678774 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server"
Feb 17 16:07:07 crc kubenswrapper[4808]: E0217 16:07:07.678794 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-utilities"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678807 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-utilities"
Feb 17 16:07:07 crc kubenswrapper[4808]: E0217 16:07:07.678835 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-content"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.678847 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="extract-content"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.679011 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" containerName="registry-server"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.680336 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.682191 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.687114 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.687152 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269e3307-558f-4451-bf67-eb8e9be6237f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.687166 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scd7n\" (UniqueName: \"kubernetes.io/projected/269e3307-558f-4451-bf67-eb8e9be6237f-kube-api-access-scd7n\") on node \"crc\" DevicePath \"\""
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.692609 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"]
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.788983 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.789039 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.789104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.889927 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.889988 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.890043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.890631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.890944 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:07 crc kubenswrapper[4808]: I0217 16:07:07.911318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.004190 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177300 4808 generic.go:334] "Generic (PLEG): container finished" podID="269e3307-558f-4451-bf67-eb8e9be6237f" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b" exitCode=0
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"}
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hcq8m" event={"ID":"269e3307-558f-4451-bf67-eb8e9be6237f","Type":"ContainerDied","Data":"1fd2bf092e18b776d10b7a03c35f6845c7dfb7c5d54cda2b4dcb7c0f8b0de573"}
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177440 4808 scope.go:117] "RemoveContainer" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.177483 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hcq8m"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.206889 4808 scope.go:117] "RemoveContainer" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.211009 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"]
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.213379 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hcq8m"]
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.231522 4808 scope.go:117] "RemoveContainer" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.232741 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz"]
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.249287 4808 scope.go:117] "RemoveContainer" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"
Feb 17 16:07:08 crc kubenswrapper[4808]: E0217 16:07:08.249730 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b\": container with ID starting with 567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b not found: ID does not exist" containerID="567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.249789 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b"} err="failed to get container status \"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b\": rpc error: code = NotFound desc = could not find container \"567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b\": container with ID starting with 567ccf9b348817541d64c9c0d47904ae360ad809841f67f8b47d370c74c2890b not found: ID does not exist"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.249818 4808 scope.go:117] "RemoveContainer" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"
Feb 17 16:07:08 crc kubenswrapper[4808]: E0217 16:07:08.250230 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3\": container with ID starting with 6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3 not found: ID does not exist" containerID="6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.250252 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3"} err="failed to get container status \"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3\": rpc error: code = NotFound desc = could not find container \"6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3\": container with ID starting with 6732b31f19f9917ed5b7e9a5e17b2a7cdea0ad7e072c62eed971fc3ab3ba2cd3 not found: ID does not exist"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.250268 4808 scope.go:117] "RemoveContainer" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"
Feb 17 16:07:08 crc kubenswrapper[4808]: E0217 16:07:08.250607 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587\": container with ID starting with af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587 not found: ID does not exist" containerID="af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"
Feb 17 16:07:08 crc kubenswrapper[4808]: I0217 16:07:08.250623 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587"} err="failed to get container status \"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587\": rpc error: code = NotFound desc = could not find container \"af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587\": container with ID starting with af4b617fa0a9e93e637d807f206e575d6517f5e1d1a1ce815f1a1f35fca1c587 not found: ID does not exist"
Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.158916 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269e3307-558f-4451-bf67-eb8e9be6237f" path="/var/lib/kubelet/pods/269e3307-558f-4451-bf67-eb8e9be6237f/volumes"
Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.187282 4808 generic.go:334] "Generic (PLEG): container finished" podID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerID="82792d966d5393ffaa4332aea9a17514adac42b7cc94afea4847c0cb7c99de4f" exitCode=0
Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.187354 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"82792d966d5393ffaa4332aea9a17514adac42b7cc94afea4847c0cb7c99de4f"}
Feb 17 16:07:09 crc kubenswrapper[4808]: I0217 16:07:09.187396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerStarted","Data":"5679c07336490e02ce9e6644859a6efa88ccfa9a9e2b80bfb7f81039ee25987b"}
Feb 
17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.217159 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.217879 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.220509 4808 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-26fhb" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.220991 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.226041 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.230148 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.323037 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdmn\" (UniqueName: \"kubernetes.io/projected/e722f9d4-4e9f-4cb6-bed6-59c141dffcb6-kube-api-access-zqdmn\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.323095 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.424179 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdmn\" (UniqueName: \"kubernetes.io/projected/e722f9d4-4e9f-4cb6-bed6-59c141dffcb6-kube-api-access-zqdmn\") pod 
\"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.424261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.430651 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.430726 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/951df6a420e49aca90444f5a4550c43a8d1257bfc8291118537598533d0c9023/globalmount\"" pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.466246 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqdmn\" (UniqueName: \"kubernetes.io/projected/e722f9d4-4e9f-4cb6-bed6-59c141dffcb6-kube-api-access-zqdmn\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.472479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-abf4a987-c4f6-472c-8f72-ed6151cc0597\") pod \"minio\" (UID: \"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6\") " pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 
16:07:10.542185 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 17 16:07:10 crc kubenswrapper[4808]: I0217 16:07:10.761713 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 17 16:07:10 crc kubenswrapper[4808]: W0217 16:07:10.765211 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode722f9d4_4e9f_4cb6_bed6_59c141dffcb6.slice/crio-bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e WatchSource:0}: Error finding container bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e: Status 404 returned error can't find the container with id bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.201405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6","Type":"ContainerStarted","Data":"bf92541f1a9427958953ce47848753035414d23e220fe7d2f6a583f5b250056e"} Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.203719 4808 generic.go:334] "Generic (PLEG): container finished" podID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerID="cb0b48b5a25cf604e7682c779b1d79f2d82c02abe4339836e41cde853024f884" exitCode=0 Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.203754 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"cb0b48b5a25cf604e7682c779b1d79f2d82c02abe4339836e41cde853024f884"} Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.217467 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.218523 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.238842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.244560 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.245040 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.245141 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.345777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.345830 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.345859 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.346354 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.346413 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.374102 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"redhat-operators-gqxh7\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:11 crc kubenswrapper[4808]: I0217 16:07:11.549084 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:12 crc kubenswrapper[4808]: I0217 16:07:12.006005 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:12 crc kubenswrapper[4808]: I0217 16:07:12.213647 4808 generic.go:334] "Generic (PLEG): container finished" podID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerID="c5e8563e9f798c18d0db5fdb9fe721f12311862ab9c9c89d722f9e1221976b26" exitCode=0 Feb 17 16:07:12 crc kubenswrapper[4808]: I0217 16:07:12.213688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"c5e8563e9f798c18d0db5fdb9fe721f12311862ab9c9c89d722f9e1221976b26"} Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.220253 4808 generic.go:334] "Generic (PLEG): container finished" podID="a78a92b2-62a6-4695-8363-7585b9131e18" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" exitCode=0 Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.220434 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96"} Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.221528 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerStarted","Data":"a419479e130f9555fcdc8967da0620259792ea15250ef1b70571c1f01800c407"} Feb 17 16:07:13 crc kubenswrapper[4808]: I0217 16:07:13.900740 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.085493 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") pod \"da4f14dc-179d-4178-9a9c-747ab825f3e4\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.085608 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") pod \"da4f14dc-179d-4178-9a9c-747ab825f3e4\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.085658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") pod \"da4f14dc-179d-4178-9a9c-747ab825f3e4\" (UID: \"da4f14dc-179d-4178-9a9c-747ab825f3e4\") " Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.086800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle" (OuterVolumeSpecName: "bundle") pod "da4f14dc-179d-4178-9a9c-747ab825f3e4" (UID: "da4f14dc-179d-4178-9a9c-747ab825f3e4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.097400 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util" (OuterVolumeSpecName: "util") pod "da4f14dc-179d-4178-9a9c-747ab825f3e4" (UID: "da4f14dc-179d-4178-9a9c-747ab825f3e4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.098793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d" (OuterVolumeSpecName: "kube-api-access-h6j6d") pod "da4f14dc-179d-4178-9a9c-747ab825f3e4" (UID: "da4f14dc-179d-4178-9a9c-747ab825f3e4"). InnerVolumeSpecName "kube-api-access-h6j6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.186980 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.187079 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6j6d\" (UniqueName: \"kubernetes.io/projected/da4f14dc-179d-4178-9a9c-747ab825f3e4-kube-api-access-h6j6d\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.187094 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da4f14dc-179d-4178-9a9c-747ab825f3e4-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.229493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" event={"ID":"da4f14dc-179d-4178-9a9c-747ab825f3e4","Type":"ContainerDied","Data":"5679c07336490e02ce9e6644859a6efa88ccfa9a9e2b80bfb7f81039ee25987b"} Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.229530 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5679c07336490e02ce9e6644859a6efa88ccfa9a9e2b80bfb7f81039ee25987b" Feb 17 16:07:14 crc kubenswrapper[4808]: I0217 16:07:14.229646 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz" Feb 17 16:07:15 crc kubenswrapper[4808]: I0217 16:07:15.241098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerStarted","Data":"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45"} Feb 17 16:07:15 crc kubenswrapper[4808]: I0217 16:07:15.245673 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"e722f9d4-4e9f-4cb6-bed6-59c141dffcb6","Type":"ContainerStarted","Data":"62a147fb05d1f8af7a89e11b6938afed2a7e8fab9079bc0d38119bc3c0149235"} Feb 17 16:07:15 crc kubenswrapper[4808]: I0217 16:07:15.287358 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.568218926 podStartE2EDuration="8.287333993s" podCreationTimestamp="2026-02-17 16:07:07 +0000 UTC" firstStartedPulling="2026-02-17 16:07:10.767757337 +0000 UTC m=+794.284116410" lastFinishedPulling="2026-02-17 16:07:14.486872394 +0000 UTC m=+798.003231477" observedRunningTime="2026-02-17 16:07:15.283731876 +0000 UTC m=+798.800090999" watchObservedRunningTime="2026-02-17 16:07:15.287333993 +0000 UTC m=+798.803693076" Feb 17 16:07:16 crc kubenswrapper[4808]: I0217 16:07:16.256469 4808 generic.go:334] "Generic (PLEG): container finished" podID="a78a92b2-62a6-4695-8363-7585b9131e18" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" exitCode=0 Feb 17 16:07:16 crc kubenswrapper[4808]: I0217 16:07:16.256592 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45"} Feb 17 16:07:17 crc kubenswrapper[4808]: I0217 16:07:17.266158 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerStarted","Data":"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f"} Feb 17 16:07:17 crc kubenswrapper[4808]: I0217 16:07:17.340015 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqxh7" podStartSLOduration=3.464451665 podStartE2EDuration="6.339997899s" podCreationTimestamp="2026-02-17 16:07:11 +0000 UTC" firstStartedPulling="2026-02-17 16:07:13.835050002 +0000 UTC m=+797.351409075" lastFinishedPulling="2026-02-17 16:07:16.710596226 +0000 UTC m=+800.226955309" observedRunningTime="2026-02-17 16:07:17.333693508 +0000 UTC m=+800.850052581" watchObservedRunningTime="2026-02-17 16:07:17.339997899 +0000 UTC m=+800.856356972" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739008 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj"] Feb 17 16:07:19 crc kubenswrapper[4808]: E0217 16:07:19.739552 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="util" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739564 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="util" Feb 17 16:07:19 crc kubenswrapper[4808]: E0217 16:07:19.739599 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="pull" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739605 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="pull" Feb 17 16:07:19 crc kubenswrapper[4808]: E0217 16:07:19.739614 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="extract" 
Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739621 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="extract" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.739737 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4f14dc-179d-4178-9a9c-747ab825f3e4" containerName="extract" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.740380 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.743791 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.743882 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.743925 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.745923 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.746796 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.757588 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-rnm4v" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.780324 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj"] Feb 17 16:07:19 crc 
kubenswrapper[4808]: I0217 16:07:19.797477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-apiservice-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797554 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-webhook-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797783 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvg5\" (UniqueName: \"kubernetes.io/projected/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-kube-api-access-tqvg5\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.797888 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-manager-config\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-apiservice-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898902 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-webhook-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898928 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqvg5\" (UniqueName: 
\"kubernetes.io/projected/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-kube-api-access-tqvg5\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.898994 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-manager-config\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.899900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-manager-config\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.904737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-webhook-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.904735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.906192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-apiservice-cert\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:19 crc kubenswrapper[4808]: I0217 16:07:19.917804 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqvg5\" (UniqueName: \"kubernetes.io/projected/fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec-kube-api-access-tqvg5\") pod \"loki-operator-controller-manager-85fb78767c-g2qqj\" (UID: \"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:20 crc kubenswrapper[4808]: I0217 16:07:20.060796 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:20 crc kubenswrapper[4808]: I0217 16:07:20.280765 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj"] Feb 17 16:07:20 crc kubenswrapper[4808]: I0217 16:07:20.301070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" event={"ID":"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec","Type":"ContainerStarted","Data":"cfec0a27f8d32b6591fb291282a859313a8962cc233323b16ed723d7ade2cac8"} Feb 17 16:07:21 crc kubenswrapper[4808]: I0217 16:07:21.549355 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:21 crc kubenswrapper[4808]: I0217 16:07:21.549744 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:22 crc kubenswrapper[4808]: I0217 16:07:22.602776 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqxh7" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" probeResult="failure" output=< Feb 17 16:07:22 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:07:22 crc kubenswrapper[4808]: > Feb 17 16:07:26 crc kubenswrapper[4808]: I0217 16:07:26.351335 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" event={"ID":"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec","Type":"ContainerStarted","Data":"97ff72783f138b3f69f602096a300d9bbdb9f63954ae9d4d801b9b136080fbbc"} Feb 17 16:07:31 crc kubenswrapper[4808]: I0217 16:07:31.612517 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 
16:07:31 crc kubenswrapper[4808]: I0217 16:07:31.670367 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:31 crc kubenswrapper[4808]: I0217 16:07:31.847116 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.387036 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" event={"ID":"fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec","Type":"ContainerStarted","Data":"2bab7c8842c1ae4881cf254bb42d2d92593fdc5607b5097adfe47cdd1de7b485"} Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.387812 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.391136 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" Feb 17 16:07:32 crc kubenswrapper[4808]: I0217 16:07:32.418466 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-85fb78767c-g2qqj" podStartSLOduration=1.550923957 podStartE2EDuration="13.418424242s" podCreationTimestamp="2026-02-17 16:07:19 +0000 UTC" firstStartedPulling="2026-02-17 16:07:20.293892423 +0000 UTC m=+803.810251496" lastFinishedPulling="2026-02-17 16:07:32.161392708 +0000 UTC m=+815.677751781" observedRunningTime="2026-02-17 16:07:32.414235098 +0000 UTC m=+815.930594181" watchObservedRunningTime="2026-02-17 16:07:32.418424242 +0000 UTC m=+815.934783335" Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.395000 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqxh7" 
podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" containerID="cri-o://bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" gracePeriod=2 Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.822774 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.993229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") pod \"a78a92b2-62a6-4695-8363-7585b9131e18\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.993304 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") pod \"a78a92b2-62a6-4695-8363-7585b9131e18\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.993374 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") pod \"a78a92b2-62a6-4695-8363-7585b9131e18\" (UID: \"a78a92b2-62a6-4695-8363-7585b9131e18\") " Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.994211 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities" (OuterVolumeSpecName: "utilities") pod "a78a92b2-62a6-4695-8363-7585b9131e18" (UID: "a78a92b2-62a6-4695-8363-7585b9131e18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:33 crc kubenswrapper[4808]: I0217 16:07:33.999526 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp" (OuterVolumeSpecName: "kube-api-access-s4jgp") pod "a78a92b2-62a6-4695-8363-7585b9131e18" (UID: "a78a92b2-62a6-4695-8363-7585b9131e18"). InnerVolumeSpecName "kube-api-access-s4jgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.095177 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jgp\" (UniqueName: \"kubernetes.io/projected/a78a92b2-62a6-4695-8363-7585b9131e18-kube-api-access-s4jgp\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.095226 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.114541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a78a92b2-62a6-4695-8363-7585b9131e18" (UID: "a78a92b2-62a6-4695-8363-7585b9131e18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.196715 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78a92b2-62a6-4695-8363-7585b9131e18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402293 4808 generic.go:334] "Generic (PLEG): container finished" podID="a78a92b2-62a6-4695-8363-7585b9131e18" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" exitCode=0 Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f"} Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402391 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqxh7" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402429 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqxh7" event={"ID":"a78a92b2-62a6-4695-8363-7585b9131e18","Type":"ContainerDied","Data":"a419479e130f9555fcdc8967da0620259792ea15250ef1b70571c1f01800c407"} Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.402449 4808 scope.go:117] "RemoveContainer" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.421368 4808 scope.go:117] "RemoveContainer" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.451252 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.456727 4808 scope.go:117] "RemoveContainer" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.458000 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqxh7"] Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.470754 4808 scope.go:117] "RemoveContainer" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" Feb 17 16:07:34 crc kubenswrapper[4808]: E0217 16:07:34.473697 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f\": container with ID starting with bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f not found: ID does not exist" containerID="bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.473740 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f"} err="failed to get container status \"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f\": rpc error: code = NotFound desc = could not find container \"bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f\": container with ID starting with bfb452b12035a5fd06394af94686f8fe9c71aeb2ce1ecc7af97247031bc8365f not found: ID does not exist" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.473766 4808 scope.go:117] "RemoveContainer" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" Feb 17 16:07:34 crc kubenswrapper[4808]: E0217 16:07:34.474109 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45\": container with ID starting with fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45 not found: ID does not exist" containerID="fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.474165 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45"} err="failed to get container status \"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45\": rpc error: code = NotFound desc = could not find container \"fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45\": container with ID starting with fafa3388f16d372afa05bf1e6edc88215825c2eed92931f869b65e0a268bbc45 not found: ID does not exist" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.474207 4808 scope.go:117] "RemoveContainer" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" Feb 17 16:07:34 crc kubenswrapper[4808]: E0217 
16:07:34.474466 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96\": container with ID starting with 20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96 not found: ID does not exist" containerID="20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96" Feb 17 16:07:34 crc kubenswrapper[4808]: I0217 16:07:34.474486 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96"} err="failed to get container status \"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96\": rpc error: code = NotFound desc = could not find container \"20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96\": container with ID starting with 20a3d1f808a67532d6a3df73638ee8fad690961583885e086547b65bb3334b96 not found: ID does not exist" Feb 17 16:07:35 crc kubenswrapper[4808]: I0217 16:07:35.157797 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" path="/var/lib/kubelet/pods/a78a92b2-62a6-4695-8363-7585b9131e18/volumes" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616093 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl"] Feb 17 16:08:07 crc kubenswrapper[4808]: E0217 16:08:07.616842 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-utilities" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616853 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-utilities" Feb 17 16:08:07 crc kubenswrapper[4808]: E0217 16:08:07.616869 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" Feb 17 16:08:07 crc kubenswrapper[4808]: E0217 16:08:07.616886 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-content" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="extract-content" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.616993 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78a92b2-62a6-4695-8363-7585b9131e18" containerName="registry-server" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.617808 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.620112 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.624426 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl"] Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.930609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:07 crc 
kubenswrapper[4808]: I0217 16:08:07.930701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:07 crc kubenswrapper[4808]: I0217 16:08:07.930793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.031796 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.031870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.031909 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.032351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.032648 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.050753 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.249024 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.447445 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl"] Feb 17 16:08:08 crc kubenswrapper[4808]: I0217 16:08:08.944317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerStarted","Data":"74b894f184bb83b076cc8f257ea609aa7d7356620da3cd381d1989e96fd746cf"} Feb 17 16:08:09 crc kubenswrapper[4808]: I0217 16:08:09.953317 4808 generic.go:334] "Generic (PLEG): container finished" podID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerID="fdf9f991333fcecafde3c4ecc81e0edee4d4616057eb1fff2bed6420d00eea2b" exitCode=0 Feb 17 16:08:09 crc kubenswrapper[4808]: I0217 16:08:09.953393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"fdf9f991333fcecafde3c4ecc81e0edee4d4616057eb1fff2bed6420d00eea2b"} Feb 17 16:08:11 crc kubenswrapper[4808]: I0217 16:08:11.969022 4808 generic.go:334] "Generic (PLEG): container finished" podID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerID="927cd9ed4f81fcf9ea82385b3d924f88e461102f00f2185156a7e4092db64b6a" exitCode=0 Feb 17 16:08:11 crc kubenswrapper[4808]: I0217 16:08:11.969207 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"927cd9ed4f81fcf9ea82385b3d924f88e461102f00f2185156a7e4092db64b6a"} Feb 17 16:08:12 crc kubenswrapper[4808]: I0217 16:08:12.986617 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"5b696282029bbdf5471ceac57569133dc5868022bf6e116af1b0d637d41ff5d7"} Feb 17 16:08:12 crc kubenswrapper[4808]: I0217 16:08:12.986559 4808 generic.go:334] "Generic (PLEG): container finished" podID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerID="5b696282029bbdf5471ceac57569133dc5868022bf6e116af1b0d637d41ff5d7" exitCode=0 Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.335812 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.477977 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") pod \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.478171 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") pod \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.478313 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") pod \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\" (UID: \"5903df73-c7d6-46cf-8aa2-4f0067c08b99\") " Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.478808 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle" (OuterVolumeSpecName: "bundle") pod "5903df73-c7d6-46cf-8aa2-4f0067c08b99" (UID: "5903df73-c7d6-46cf-8aa2-4f0067c08b99"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.484811 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j" (OuterVolumeSpecName: "kube-api-access-gmg5j") pod "5903df73-c7d6-46cf-8aa2-4f0067c08b99" (UID: "5903df73-c7d6-46cf-8aa2-4f0067c08b99"). InnerVolumeSpecName "kube-api-access-gmg5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.580134 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.580180 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmg5j\" (UniqueName: \"kubernetes.io/projected/5903df73-c7d6-46cf-8aa2-4f0067c08b99-kube-api-access-gmg5j\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.746164 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util" (OuterVolumeSpecName: "util") pod "5903df73-c7d6-46cf-8aa2-4f0067c08b99" (UID: "5903df73-c7d6-46cf-8aa2-4f0067c08b99"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:08:14 crc kubenswrapper[4808]: I0217 16:08:14.782780 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5903df73-c7d6-46cf-8aa2-4f0067c08b99-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:15 crc kubenswrapper[4808]: I0217 16:08:15.003024 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" event={"ID":"5903df73-c7d6-46cf-8aa2-4f0067c08b99","Type":"ContainerDied","Data":"74b894f184bb83b076cc8f257ea609aa7d7356620da3cd381d1989e96fd746cf"} Feb 17 16:08:15 crc kubenswrapper[4808]: I0217 16:08:15.003080 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74b894f184bb83b076cc8f257ea609aa7d7356620da3cd381d1989e96fd746cf" Feb 17 16:08:15 crc kubenswrapper[4808]: I0217 16:08:15.003129 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714284 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"] Feb 17 16:08:16 crc kubenswrapper[4808]: E0217 16:08:16.714540 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="util" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714555 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="util" Feb 17 16:08:16 crc kubenswrapper[4808]: E0217 16:08:16.714566 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="extract" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714599 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="extract" Feb 17 16:08:16 crc kubenswrapper[4808]: E0217 16:08:16.714615 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="pull" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714627 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="pull" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.714768 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5903df73-c7d6-46cf-8aa2-4f0067c08b99" containerName="extract" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.715236 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.717700 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.718350 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.718350 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9sqg6" Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.741562 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"] Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.810495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw8kr\" (UniqueName: \"kubernetes.io/projected/691d742f-d55e-48e4-89bc-7936f6b31f12-kube-api-access-qw8kr\") pod \"nmstate-operator-694c9596b7-bjzdq\" (UID: \"691d742f-d55e-48e4-89bc-7936f6b31f12\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" Feb 17 
16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.911531 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw8kr\" (UniqueName: \"kubernetes.io/projected/691d742f-d55e-48e4-89bc-7936f6b31f12-kube-api-access-qw8kr\") pod \"nmstate-operator-694c9596b7-bjzdq\" (UID: \"691d742f-d55e-48e4-89bc-7936f6b31f12\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"
Feb 17 16:08:16 crc kubenswrapper[4808]: I0217 16:08:16.942204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw8kr\" (UniqueName: \"kubernetes.io/projected/691d742f-d55e-48e4-89bc-7936f6b31f12-kube-api-access-qw8kr\") pod \"nmstate-operator-694c9596b7-bjzdq\" (UID: \"691d742f-d55e-48e4-89bc-7936f6b31f12\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"
Feb 17 16:08:17 crc kubenswrapper[4808]: I0217 16:08:17.103322 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"
Feb 17 16:08:17 crc kubenswrapper[4808]: I0217 16:08:17.423147 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bjzdq"]
Feb 17 16:08:18 crc kubenswrapper[4808]: I0217 16:08:18.023786 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" event={"ID":"691d742f-d55e-48e4-89bc-7936f6b31f12","Type":"ContainerStarted","Data":"369b6c728989f36b73866d643238042a7890f00ca5c64336d2a8b9e3b8265cee"}
Feb 17 16:08:20 crc kubenswrapper[4808]: I0217 16:08:20.035499 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" event={"ID":"691d742f-d55e-48e4-89bc-7936f6b31f12","Type":"ContainerStarted","Data":"6823c5483e3a0f31a02ad66732891203651922e948c6e6d64989a130cad26b65"}
Feb 17 16:08:20 crc kubenswrapper[4808]: I0217 16:08:20.056969 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-bjzdq" podStartSLOduration=1.9167388650000001 podStartE2EDuration="4.05694338s" podCreationTimestamp="2026-02-17 16:08:16 +0000 UTC" firstStartedPulling="2026-02-17 16:08:17.422768778 +0000 UTC m=+860.939127861" lastFinishedPulling="2026-02-17 16:08:19.562973273 +0000 UTC m=+863.079332376" observedRunningTime="2026-02-17 16:08:20.049070378 +0000 UTC m=+863.565429461" watchObservedRunningTime="2026-02-17 16:08:20.05694338 +0000 UTC m=+863.573302493"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.042946 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.052342 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.054815 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nsdgw"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.067220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.068525 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.079055 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.085821 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.098927 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.126657 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q5xs9"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.127821 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.184182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2e1846-9112-48fb-b69e-0a12393c62e6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.184225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhnk\" (UniqueName: \"kubernetes.io/projected/56fb3ff0-71b6-4792-acdf-33edb0cb23b4-kube-api-access-nfhnk\") pod \"nmstate-metrics-58c85c668d-j8rw5\" (UID: \"56fb3ff0-71b6-4792-acdf-33edb0cb23b4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.184248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fxfp\" (UniqueName: \"kubernetes.io/projected/9f2e1846-9112-48fb-b69e-0a12393c62e6-kube-api-access-6fxfp\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.218074 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.218920 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.225033 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.225273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.226283 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h64qz"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.246131 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285729 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-nmstate-lock\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-ovs-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285901 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-dbus-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285939 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2e1846-9112-48fb-b69e-0a12393c62e6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fxfp\" (UniqueName: \"kubernetes.io/projected/9f2e1846-9112-48fb-b69e-0a12393c62e6-kube-api-access-6fxfp\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.285997 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhnk\" (UniqueName: \"kubernetes.io/projected/56fb3ff0-71b6-4792-acdf-33edb0cb23b4-kube-api-access-nfhnk\") pod \"nmstate-metrics-58c85c668d-j8rw5\" (UID: \"56fb3ff0-71b6-4792-acdf-33edb0cb23b4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.286067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cddt\" (UniqueName: \"kubernetes.io/projected/16498191-a001-4403-af35-b76104720e91-kube-api-access-9cddt\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.316089 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9f2e1846-9112-48fb-b69e-0a12393c62e6-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.326188 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fxfp\" (UniqueName: \"kubernetes.io/projected/9f2e1846-9112-48fb-b69e-0a12393c62e6-kube-api-access-6fxfp\") pod \"nmstate-webhook-866bcb46dc-vz75q\" (UID: \"9f2e1846-9112-48fb-b69e-0a12393c62e6\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.329456 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhnk\" (UniqueName: \"kubernetes.io/projected/56fb3ff0-71b6-4792-acdf-33edb0cb23b4-kube-api-access-nfhnk\") pod \"nmstate-metrics-58c85c668d-j8rw5\" (UID: \"56fb3ff0-71b6-4792-acdf-33edb0cb23b4\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.387514 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cddt\" (UniqueName: \"kubernetes.io/projected/16498191-a001-4403-af35-b76104720e91-kube-api-access-9cddt\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388077 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-nmstate-lock\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388116 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7bs\" (UniqueName: \"kubernetes.io/projected/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-kube-api-access-bf7bs\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388158 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-ovs-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388185 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-dbus-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388833 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-nmstate-lock\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.388888 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-ovs-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.389189 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/16498191-a001-4403-af35-b76104720e91-dbus-socket\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.392480 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.402085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.405557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cddt\" (UniqueName: \"kubernetes.io/projected/16498191-a001-4403-af35-b76104720e91-kube-api-access-9cddt\") pod \"nmstate-handler-q5xs9\" (UID: \"16498191-a001-4403-af35-b76104720e91\") " pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.453045 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.470407 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79ddccbf49-dhwd5"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.471419 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.480838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79ddccbf49-dhwd5"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.495503 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf7bs\" (UniqueName: \"kubernetes.io/projected/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-kube-api-access-bf7bs\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.495608 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.495716 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: E0217 16:08:21.496014 4808 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Feb 17 16:08:21 crc kubenswrapper[4808]: E0217 16:08:21.496095 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert podName:2c731526-11bd-4ef9-bb62-eb3a0512ff1d nodeName:}" failed. No retries permitted until 2026-02-17 16:08:21.996069174 +0000 UTC m=+865.512428247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-48n66" (UID: "2c731526-11bd-4ef9-bb62-eb3a0512ff1d") : secret "plugin-serving-cert" not found
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.497540 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.529747 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf7bs\" (UniqueName: \"kubernetes.io/projected/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-kube-api-access-bf7bs\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:21 crc kubenswrapper[4808]: W0217 16:08:21.540884 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16498191_a001_4403_af35_b76104720e91.slice/crio-f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab WatchSource:0}: Error finding container f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab: Status 404 returned error can't find the container with id f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.597693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-trusted-ca-bundle\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598458 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-oauth-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598620 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-oauth-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598698 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598774 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbfpw\" (UniqueName: \"kubernetes.io/projected/c546f1bc-ad95-41f2-988e-23868a5ab5dd-kube-api-access-cbfpw\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598853 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.598883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-service-ca\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.667854 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700272 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-oauth-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700348 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-oauth-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbfpw\" (UniqueName: \"kubernetes.io/projected/c546f1bc-ad95-41f2-988e-23868a5ab5dd-kube-api-access-cbfpw\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700530 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-service-ca\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.700630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-trusted-ca-bundle\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.702036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-trusted-ca-bundle\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.702185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-oauth-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.703213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-service-ca\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.703225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.707215 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-oauth-config\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.710203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c546f1bc-ad95-41f2-988e-23868a5ab5dd-console-serving-cert\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.717754 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbfpw\" (UniqueName: \"kubernetes.io/projected/c546f1bc-ad95-41f2-988e-23868a5ab5dd-kube-api-access-cbfpw\") pod \"console-79ddccbf49-dhwd5\" (UID: \"c546f1bc-ad95-41f2-988e-23868a5ab5dd\") " pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.742015 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"]
Feb 17 16:08:21 crc kubenswrapper[4808]: I0217 16:08:21.801714 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.003950 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.008461 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c731526-11bd-4ef9-bb62-eb3a0512ff1d-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-48n66\" (UID: \"2c731526-11bd-4ef9-bb62-eb3a0512ff1d\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.033855 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79ddccbf49-dhwd5"]
Feb 17 16:08:22 crc kubenswrapper[4808]: W0217 16:08:22.039494 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc546f1bc_ad95_41f2_988e_23868a5ab5dd.slice/crio-6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3 WatchSource:0}: Error finding container 6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3: Status 404 returned error can't find the container with id 6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.056625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79ddccbf49-dhwd5" event={"ID":"c546f1bc-ad95-41f2-988e-23868a5ab5dd","Type":"ContainerStarted","Data":"6a2c2be583e7fc4d4260010e22f703471a3cc6881ba1f1d18a9f2fc8c3f08ff3"}
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.058945 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" event={"ID":"9f2e1846-9112-48fb-b69e-0a12393c62e6","Type":"ContainerStarted","Data":"b2bf587e03f35613767dd4dab19285199930b9d831e0acb900993dc1090d0405"}
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.059822 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q5xs9" event={"ID":"16498191-a001-4403-af35-b76104720e91","Type":"ContainerStarted","Data":"f3f3d4e51eb70ecd36943694b53b6fe16de56f082a2662b348f39fc036736fab"}
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.060811 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" event={"ID":"56fb3ff0-71b6-4792-acdf-33edb0cb23b4","Type":"ContainerStarted","Data":"06a2f8058ee07cc6a58e11dd4d8e8b4d02e37fcc4b43fb38751ea191f8767c36"}
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.136092 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"
Feb 17 16:08:22 crc kubenswrapper[4808]: I0217 16:08:22.605259 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66"]
Feb 17 16:08:22 crc kubenswrapper[4808]: W0217 16:08:22.618371 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c731526_11bd_4ef9_bb62_eb3a0512ff1d.slice/crio-c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114 WatchSource:0}: Error finding container c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114: Status 404 returned error can't find the container with id c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114
Feb 17 16:08:23 crc kubenswrapper[4808]: I0217 16:08:23.072920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" event={"ID":"2c731526-11bd-4ef9-bb62-eb3a0512ff1d","Type":"ContainerStarted","Data":"c153c68d2519e32f92364798f2da80c5e00d52f642655c80c136aefa4bf59114"}
Feb 17 16:08:23 crc kubenswrapper[4808]: I0217 16:08:23.074488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79ddccbf49-dhwd5" event={"ID":"c546f1bc-ad95-41f2-988e-23868a5ab5dd","Type":"ContainerStarted","Data":"c7305c9670bf9677188ae30700d505027d60190bcc2199580b39ceb50994b7ba"}
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.088485 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" event={"ID":"56fb3ff0-71b6-4792-acdf-33edb0cb23b4","Type":"ContainerStarted","Data":"f0562f74d693ba6b0b6a602bb3975ed95eb9636ba5661ed4317dc335ad58c81a"}
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.090086 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" event={"ID":"9f2e1846-9112-48fb-b69e-0a12393c62e6","Type":"ContainerStarted","Data":"5586b0d7a9493a03c5037cb79dac2bce1b44f9432738c8b202515093df790730"}
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.090239 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.092118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q5xs9" event={"ID":"16498191-a001-4403-af35-b76104720e91","Type":"ContainerStarted","Data":"a70b40991af1b76c8bcb0c03b7cd5e5719ac8cc120015d60128daf23f5eebc12"}
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.092251 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.107472 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q" podStartSLOduration=1.5875267389999999 podStartE2EDuration="4.107423952s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:21.738944167 +0000 UTC m=+865.255303240" lastFinishedPulling="2026-02-17 16:08:24.25884138 +0000 UTC m=+867.775200453" observedRunningTime="2026-02-17 16:08:25.101825032 +0000 UTC m=+868.618184105" watchObservedRunningTime="2026-02-17 16:08:25.107423952 +0000 UTC m=+868.623783035"
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.112891 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79ddccbf49-dhwd5" podStartSLOduration=4.112872259 podStartE2EDuration="4.112872259s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:08:23.092546144 +0000 UTC m=+866.608905217" watchObservedRunningTime="2026-02-17 16:08:25.112872259 +0000 UTC m=+868.629231352"
Feb 17 16:08:25 crc kubenswrapper[4808]: I0217 16:08:25.125604 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q5xs9" podStartSLOduration=1.462694416 podStartE2EDuration="4.125558269s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:21.553382723 +0000 UTC m=+865.069741796" lastFinishedPulling="2026-02-17 16:08:24.216246576 +0000 UTC m=+867.732605649" observedRunningTime="2026-02-17 16:08:25.11851814 +0000 UTC m=+868.634877223" watchObservedRunningTime="2026-02-17 16:08:25.125558269 +0000 UTC m=+868.641917382"
Feb 17 16:08:27 crc kubenswrapper[4808]: I0217 16:08:27.116232 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" event={"ID":"2c731526-11bd-4ef9-bb62-eb3a0512ff1d","Type":"ContainerStarted","Data":"c385c0ad3263aa669d2ee036cf635c1fe4f9a5f1d34e898afa387b928eb4d0f7"}
Feb 17 16:08:27 crc kubenswrapper[4808]: I0217 16:08:27.135709 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-48n66" podStartSLOduration=2.802678918 podStartE2EDuration="6.13568975s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:22.623392053 +0000 UTC m=+866.139751116" lastFinishedPulling="2026-02-17 16:08:25.956402875 +0000 UTC m=+869.472761948" observedRunningTime="2026-02-17 16:08:27.131552358 +0000 UTC m=+870.647911521" watchObservedRunningTime="2026-02-17 16:08:27.13568975 +0000 UTC m=+870.652048853"
Feb 17 16:08:28 crc kubenswrapper[4808]: I0217 16:08:28.128894 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" event={"ID":"56fb3ff0-71b6-4792-acdf-33edb0cb23b4","Type":"ContainerStarted","Data":"26ea35ed20769ac33f935f55683b9f8b7d7629a05eccaa4080c9185da2abd222"}
Feb 17 16:08:28 crc kubenswrapper[4808]: I0217 16:08:28.151324 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-j8rw5" podStartSLOduration=1.621908873 podStartE2EDuration="7.151296418s" podCreationTimestamp="2026-02-17 16:08:21 +0000 UTC" firstStartedPulling="2026-02-17 16:08:21.677142777 +0000 UTC m=+865.193501840" lastFinishedPulling="2026-02-17 16:08:27.206530312 +0000 UTC m=+870.722889385" observedRunningTime="2026-02-17 16:08:28.150665511 +0000 UTC m=+871.667024614" watchObservedRunningTime="2026-02-17 16:08:28.151296418 +0000 UTC m=+871.667655521"
Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.502367 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q5xs9"
Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.801850 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.803009 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:31 crc kubenswrapper[4808]: I0217 16:08:31.811313 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:32 crc kubenswrapper[4808]: I0217 16:08:32.165706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79ddccbf49-dhwd5"
Feb 17 16:08:32 crc kubenswrapper[4808]: I0217 16:08:32.252711 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"]
Feb 17 16:08:41 crc kubenswrapper[4808]: I0217 16:08:41.411428 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-vz75q"
Feb 17 16:08:51 crc kubenswrapper[4808]: I0217 16:08:51.592525 4808 patch_prober.go:28] interesting
pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:08:51 crc kubenswrapper[4808]: I0217 16:08:51.593822 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.141085 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw"] Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.143433 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.151629 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.164969 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw"] Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.243192 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 
16:08:57.243400 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.243649 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.324080 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-hdg74" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" containerID="cri-o://5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" gracePeriod=15 Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345535 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vz4b\" (UniqueName: 
\"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.345953 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.346376 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.363833 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw\" (UID: 
\"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.484769 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.678344 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hdg74_e489a46b-9123-44c6-94e0-692621760dd6/console/0.log" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.678677 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762721 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762774 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762840 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762894 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.762976 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763028 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") pod \"e489a46b-9123-44c6-94e0-692621760dd6\" (UID: \"e489a46b-9123-44c6-94e0-692621760dd6\") " Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763776 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763811 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca" (OuterVolumeSpecName: "service-ca") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config" (OuterVolumeSpecName: "console-config") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.763919 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.769255 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.769367 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.769374 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm" (OuterVolumeSpecName: "kube-api-access-6lnfm") pod "e489a46b-9123-44c6-94e0-692621760dd6" (UID: "e489a46b-9123-44c6-94e0-692621760dd6"). InnerVolumeSpecName "kube-api-access-6lnfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864451 4808 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-console-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864499 4808 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864515 4808 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-service-ca\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864526 4808 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864539 4808 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e489a46b-9123-44c6-94e0-692621760dd6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864601 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lnfm\" (UniqueName: \"kubernetes.io/projected/e489a46b-9123-44c6-94e0-692621760dd6-kube-api-access-6lnfm\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.864614 4808 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e489a46b-9123-44c6-94e0-692621760dd6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 17 16:08:57 crc kubenswrapper[4808]: I0217 16:08:57.960807 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw"] Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.388960 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-hdg74_e489a46b-9123-44c6-94e0-692621760dd6/console/0.log" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389281 4808 generic.go:334] "Generic (PLEG): container finished" podID="e489a46b-9123-44c6-94e0-692621760dd6" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" exitCode=2 Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389354 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-hdg74" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389373 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerDied","Data":"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389406 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-hdg74" event={"ID":"e489a46b-9123-44c6-94e0-692621760dd6","Type":"ContainerDied","Data":"0209add398700228e0fcc883ac99d37768a000d7cf9532764ef3bc88a5c87df2"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.389428 4808 scope.go:117] "RemoveContainer" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.392911 4808 generic.go:334] "Generic (PLEG): container finished" podID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerID="358da34cb13e59b5b2eea0ee50c08c53ee1042a95c8b7f0a5110b7c72d5bc6f1" exitCode=0 Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.392944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"358da34cb13e59b5b2eea0ee50c08c53ee1042a95c8b7f0a5110b7c72d5bc6f1"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.392967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerStarted","Data":"c907e069585e21057ee27ea3d446789d6b432b4c4f506cfa3b13885254560849"} Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.406297 4808 scope.go:117] "RemoveContainer" 
containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" Feb 17 16:08:58 crc kubenswrapper[4808]: E0217 16:08:58.406839 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a\": container with ID starting with 5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a not found: ID does not exist" containerID="5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.406886 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a"} err="failed to get container status \"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a\": rpc error: code = NotFound desc = could not find container \"5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a\": container with ID starting with 5fa014756fd5fd80eb6b1fdbbf3d68e06eb937cbb5c5ef91970212b3ef06613a not found: ID does not exist" Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.449110 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 16:08:58 crc kubenswrapper[4808]: I0217 16:08:58.453902 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-hdg74"] Feb 17 16:08:59 crc kubenswrapper[4808]: I0217 16:08:59.157287 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e489a46b-9123-44c6-94e0-692621760dd6" path="/var/lib/kubelet/pods/e489a46b-9123-44c6-94e0-692621760dd6/volumes" Feb 17 16:09:00 crc kubenswrapper[4808]: I0217 16:09:00.412424 4808 generic.go:334] "Generic (PLEG): container finished" podID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerID="4aee38a5e736b972166cae78dfa52d80c5ca2e3b48fff8ca6436228cc635549b" exitCode=0 Feb 17 16:09:00 crc 
kubenswrapper[4808]: I0217 16:09:00.412478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"4aee38a5e736b972166cae78dfa52d80c5ca2e3b48fff8ca6436228cc635549b"} Feb 17 16:09:01 crc kubenswrapper[4808]: I0217 16:09:01.420814 4808 generic.go:334] "Generic (PLEG): container finished" podID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerID="151b383ff6053e07c90cf0ea55e4844dc94db57808fcbc6f44f253fc98c01395" exitCode=0 Feb 17 16:09:01 crc kubenswrapper[4808]: I0217 16:09:01.420954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"151b383ff6053e07c90cf0ea55e4844dc94db57808fcbc6f44f253fc98c01395"} Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.662480 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.728518 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") pod \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.728734 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") pod \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.728839 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") pod \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\" (UID: \"df1cf40f-e7a2-40b1-8adb-45d2b5205584\") " Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.729846 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle" (OuterVolumeSpecName: "bundle") pod "df1cf40f-e7a2-40b1-8adb-45d2b5205584" (UID: "df1cf40f-e7a2-40b1-8adb-45d2b5205584"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.737339 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b" (OuterVolumeSpecName: "kube-api-access-8vz4b") pod "df1cf40f-e7a2-40b1-8adb-45d2b5205584" (UID: "df1cf40f-e7a2-40b1-8adb-45d2b5205584"). InnerVolumeSpecName "kube-api-access-8vz4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.742477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util" (OuterVolumeSpecName: "util") pod "df1cf40f-e7a2-40b1-8adb-45d2b5205584" (UID: "df1cf40f-e7a2-40b1-8adb-45d2b5205584"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.830521 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.830570 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df1cf40f-e7a2-40b1-8adb-45d2b5205584-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:02 crc kubenswrapper[4808]: I0217 16:09:02.830598 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vz4b\" (UniqueName: \"kubernetes.io/projected/df1cf40f-e7a2-40b1-8adb-45d2b5205584-kube-api-access-8vz4b\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:03 crc kubenswrapper[4808]: I0217 16:09:03.432951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" event={"ID":"df1cf40f-e7a2-40b1-8adb-45d2b5205584","Type":"ContainerDied","Data":"c907e069585e21057ee27ea3d446789d6b432b4c4f506cfa3b13885254560849"} Feb 17 16:09:03 crc kubenswrapper[4808]: I0217 16:09:03.433006 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c907e069585e21057ee27ea3d446789d6b432b4c4f506cfa3b13885254560849" Feb 17 16:09:03 crc kubenswrapper[4808]: I0217 16:09:03.433093 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.211647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6655d59788-74j79"] Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212447 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="pull" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212464 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="pull" Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212484 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="extract" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212492 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="extract" Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212501 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212508 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" Feb 17 16:09:12 crc kubenswrapper[4808]: E0217 16:09:12.212525 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="util" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212534 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" containerName="util" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212675 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1cf40f-e7a2-40b1-8adb-45d2b5205584" 
containerName="extract" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.212693 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e489a46b-9123-44c6-94e0-692621760dd6" containerName="console" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.213200 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.215493 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.215542 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.215700 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-jfr66" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.218761 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.218847 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.223640 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6655d59788-74j79"] Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.369645 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxrcp\" (UniqueName: \"kubernetes.io/projected/d90f3d87-35f4-4c7d-b157-424ee7b502cd-kube-api-access-fxrcp\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " 
pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.369785 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-webhook-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.369863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-apiservice-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.471521 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-webhook-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.471652 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-apiservice-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.471705 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fxrcp\" (UniqueName: \"kubernetes.io/projected/d90f3d87-35f4-4c7d-b157-424ee7b502cd-kube-api-access-fxrcp\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.483440 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-apiservice-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.490365 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d90f3d87-35f4-4c7d-b157-424ee7b502cd-webhook-cert\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.495339 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxrcp\" (UniqueName: \"kubernetes.io/projected/d90f3d87-35f4-4c7d-b157-424ee7b502cd-kube-api-access-fxrcp\") pod \"metallb-operator-controller-manager-6655d59788-74j79\" (UID: \"d90f3d87-35f4-4c7d-b157-424ee7b502cd\") " pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.527847 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.533965 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5"] Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.535991 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.539614 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.540013 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.540215 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-48gnn" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.575068 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5"] Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.674367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgrz\" (UniqueName: \"kubernetes.io/projected/6de38240-7d75-47a0-b5c1-788f619bb8ff-kube-api-access-4jgrz\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.674957 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.675026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-webhook-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.776482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgrz\" (UniqueName: \"kubernetes.io/projected/6de38240-7d75-47a0-b5c1-788f619bb8ff-kube-api-access-4jgrz\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.776553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-apiservice-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.776633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-webhook-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.785288 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-apiservice-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.790333 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6de38240-7d75-47a0-b5c1-788f619bb8ff-webhook-cert\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.798467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jgrz\" (UniqueName: \"kubernetes.io/projected/6de38240-7d75-47a0-b5c1-788f619bb8ff-kube-api-access-4jgrz\") pod \"metallb-operator-webhook-server-5f74458966-dhjp5\" (UID: \"6de38240-7d75-47a0-b5c1-788f619bb8ff\") " pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:12 crc kubenswrapper[4808]: I0217 16:09:12.893554 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.008498 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6655d59788-74j79"] Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.346300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5"] Feb 17 16:09:13 crc kubenswrapper[4808]: W0217 16:09:13.349622 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6de38240_7d75_47a0_b5c1_788f619bb8ff.slice/crio-d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b WatchSource:0}: Error finding container d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b: Status 404 returned error can't find the container with id d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.509124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" event={"ID":"6de38240-7d75-47a0-b5c1-788f619bb8ff","Type":"ContainerStarted","Data":"d39cfb10b69da9281fec7f30ff037c622a952c51ba336001fe03a8e0cb197f3b"} Feb 17 16:09:13 crc kubenswrapper[4808]: I0217 16:09:13.510393 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" event={"ID":"d90f3d87-35f4-4c7d-b157-424ee7b502cd","Type":"ContainerStarted","Data":"4478b218d0ad7e3093f89a991338f003131248ba05caf80774289a5a0217225e"} Feb 17 16:09:16 crc kubenswrapper[4808]: I0217 16:09:16.529740 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" 
event={"ID":"d90f3d87-35f4-4c7d-b157-424ee7b502cd","Type":"ContainerStarted","Data":"cac9827299cc1e54eb326fa720d07b869e929b2649828ceb750b4ea695a830c2"} Feb 17 16:09:16 crc kubenswrapper[4808]: I0217 16:09:16.530072 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:16 crc kubenswrapper[4808]: I0217 16:09:16.556999 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" podStartSLOduration=1.405463738 podStartE2EDuration="4.556974183s" podCreationTimestamp="2026-02-17 16:09:12 +0000 UTC" firstStartedPulling="2026-02-17 16:09:13.033286558 +0000 UTC m=+916.549645631" lastFinishedPulling="2026-02-17 16:09:16.184797003 +0000 UTC m=+919.701156076" observedRunningTime="2026-02-17 16:09:16.547513649 +0000 UTC m=+920.063872722" watchObservedRunningTime="2026-02-17 16:09:16.556974183 +0000 UTC m=+920.073333256" Feb 17 16:09:19 crc kubenswrapper[4808]: I0217 16:09:19.549538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" event={"ID":"6de38240-7d75-47a0-b5c1-788f619bb8ff","Type":"ContainerStarted","Data":"dae0476d672b70da7819e921ccf8551ca4ce92513ca4582e28bfb122e5e57564"} Feb 17 16:09:19 crc kubenswrapper[4808]: I0217 16:09:19.549931 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:19 crc kubenswrapper[4808]: I0217 16:09:19.571112 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" podStartSLOduration=2.460133964 podStartE2EDuration="7.571094934s" podCreationTimestamp="2026-02-17 16:09:12 +0000 UTC" firstStartedPulling="2026-02-17 16:09:13.353489299 +0000 UTC m=+916.869848372" lastFinishedPulling="2026-02-17 
16:09:18.464450269 +0000 UTC m=+921.980809342" observedRunningTime="2026-02-17 16:09:19.569032979 +0000 UTC m=+923.085392052" watchObservedRunningTime="2026-02-17 16:09:19.571094934 +0000 UTC m=+923.087454007" Feb 17 16:09:21 crc kubenswrapper[4808]: I0217 16:09:21.592040 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:21 crc kubenswrapper[4808]: I0217 16:09:21.592309 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:32 crc kubenswrapper[4808]: I0217 16:09:32.902993 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f74458966-dhjp5" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.118266 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.121986 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.131057 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.206418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-utilities\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.206890 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcqrp\" (UniqueName: \"kubernetes.io/projected/7b0c9cdb-4343-4e20-b099-0f1d04243839-kube-api-access-fcqrp\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.206972 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-catalog-content\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.308808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcqrp\" (UniqueName: \"kubernetes.io/projected/7b0c9cdb-4343-4e20-b099-0f1d04243839-kube-api-access-fcqrp\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.308864 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-catalog-content\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.308904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-utilities\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.309500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-utilities\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.309539 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0c9cdb-4343-4e20-b099-0f1d04243839-catalog-content\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.332865 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcqrp\" (UniqueName: \"kubernetes.io/projected/7b0c9cdb-4343-4e20-b099-0f1d04243839-kube-api-access-fcqrp\") pod \"certified-operators-pgghj\" (UID: \"7b0c9cdb-4343-4e20-b099-0f1d04243839\") " pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.460363 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:36 crc kubenswrapper[4808]: I0217 16:09:36.925321 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:36 crc kubenswrapper[4808]: W0217 16:09:36.931191 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b0c9cdb_4343_4e20_b099_0f1d04243839.slice/crio-16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e WatchSource:0}: Error finding container 16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e: Status 404 returned error can't find the container with id 16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e Feb 17 16:09:37 crc kubenswrapper[4808]: I0217 16:09:37.695953 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b0c9cdb-4343-4e20-b099-0f1d04243839" containerID="6ab42f795e41b23d736cc04bc17597fc7e2831ad620d3bee81bc62edb18ba793" exitCode=0 Feb 17 16:09:37 crc kubenswrapper[4808]: I0217 16:09:37.696014 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerDied","Data":"6ab42f795e41b23d736cc04bc17597fc7e2831ad620d3bee81bc62edb18ba793"} Feb 17 16:09:37 crc kubenswrapper[4808]: I0217 16:09:37.696438 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerStarted","Data":"16e7d45fd7584c772b1e16c27a75902a934207aef996ae70892ce0bff673d42e"} Feb 17 16:09:43 crc kubenswrapper[4808]: I0217 16:09:43.747317 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b0c9cdb-4343-4e20-b099-0f1d04243839" containerID="cbe1f42567236516ed0b2046eb2e20a27d612bd9627db97fd1dd6d4521f09c3b" exitCode=0 Feb 17 16:09:43 crc kubenswrapper[4808]: I0217 
16:09:43.747381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerDied","Data":"cbe1f42567236516ed0b2046eb2e20a27d612bd9627db97fd1dd6d4521f09c3b"} Feb 17 16:09:44 crc kubenswrapper[4808]: I0217 16:09:44.757794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pgghj" event={"ID":"7b0c9cdb-4343-4e20-b099-0f1d04243839","Type":"ContainerStarted","Data":"fa6447c11c7841f9d4b11d7f9dd185523aa37804b33f9cdaa631b5ae0354a92c"} Feb 17 16:09:44 crc kubenswrapper[4808]: I0217 16:09:44.781761 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pgghj" podStartSLOduration=2.352741556 podStartE2EDuration="8.781738103s" podCreationTimestamp="2026-02-17 16:09:36 +0000 UTC" firstStartedPulling="2026-02-17 16:09:37.697456607 +0000 UTC m=+941.213815680" lastFinishedPulling="2026-02-17 16:09:44.126453154 +0000 UTC m=+947.642812227" observedRunningTime="2026-02-17 16:09:44.777769886 +0000 UTC m=+948.294128979" watchObservedRunningTime="2026-02-17 16:09:44.781738103 +0000 UTC m=+948.298097176" Feb 17 16:09:46 crc kubenswrapper[4808]: I0217 16:09:46.460493 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:46 crc kubenswrapper[4808]: I0217 16:09:46.460914 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:46 crc kubenswrapper[4808]: I0217 16:09:46.545376 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.592023 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.592421 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.592482 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.593242 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.593318 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d" gracePeriod=600 Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.811460 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d" exitCode=0 Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.811531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d"} Feb 17 16:09:51 crc kubenswrapper[4808]: I0217 16:09:51.811857 4808 scope.go:117] "RemoveContainer" containerID="51dff3d704e9a98a9fc5f37394f1d0157cc8cebcc4571b1aa78c7b9262eeb36c" Feb 17 16:09:52 crc kubenswrapper[4808]: I0217 16:09:52.531223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6655d59788-74j79" Feb 17 16:09:52 crc kubenswrapper[4808]: I0217 16:09:52.819416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"} Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.332431 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-c58vl"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.336089 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.339133 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.339490 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.351176 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.354158 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.354761 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vpq7s" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.357405 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.374467 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.451219 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2hrgh"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.452432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456304 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456470 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456473 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.456661 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mlfcz" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459134 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-metrics\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" 
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459535 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpk8m\" (UniqueName: \"kubernetes.io/projected/42711d14-278f-41eb-80ce-2e67add356b9-kube-api-access-tpk8m\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459669 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22qn\" (UniqueName: \"kubernetes.io/projected/b55883d0-d8e0-4609-8b1a-033d6808ab56-kube-api-access-r22qn\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-conf\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-reloader\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.459927 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b55883d0-d8e0-4609-8b1a-033d6808ab56-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" 
Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.460004 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42711d14-278f-41eb-80ce-2e67add356b9-metrics-certs\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.460075 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/42711d14-278f-41eb-80ce-2e67add356b9-frr-startup\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.460155 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-sockets\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.495032 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-jvlrt"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.498670 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.501208 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.507413 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jvlrt"] Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metallb-excludel2\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-metrics\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561516 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpk8m\" (UniqueName: \"kubernetes.io/projected/42711d14-278f-41eb-80ce-2e67add356b9-kube-api-access-tpk8m\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561538 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22qn\" (UniqueName: \"kubernetes.io/projected/b55883d0-d8e0-4609-8b1a-033d6808ab56-kube-api-access-r22qn\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc 
kubenswrapper[4808]: I0217 16:09:53.561558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-conf\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561701 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561774 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561804 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561822 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-cert\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561839 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4hm\" (UniqueName: \"kubernetes.io/projected/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-kube-api-access-hk4hm\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lzbh\" (UniqueName: \"kubernetes.io/projected/86420ee7-2594-4ef8-8b9d-05a073118389-kube-api-access-8lzbh\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-reloader\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b55883d0-d8e0-4609-8b1a-033d6808ab56-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561917 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42711d14-278f-41eb-80ce-2e67add356b9-metrics-certs\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561935 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" 
(UniqueName: \"kubernetes.io/configmap/42711d14-278f-41eb-80ce-2e67add356b9-frr-startup\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.561955 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-sockets\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.562371 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-sockets\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.562864 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-reloader\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.563051 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-frr-conf\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.563688 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/42711d14-278f-41eb-80ce-2e67add356b9-frr-startup\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 
16:09:53.563882 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/42711d14-278f-41eb-80ce-2e67add356b9-metrics\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.567824 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42711d14-278f-41eb-80ce-2e67add356b9-metrics-certs\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.576724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpk8m\" (UniqueName: \"kubernetes.io/projected/42711d14-278f-41eb-80ce-2e67add356b9-kube-api-access-tpk8m\") pod \"frr-k8s-c58vl\" (UID: \"42711d14-278f-41eb-80ce-2e67add356b9\") " pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.577336 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b55883d0-d8e0-4609-8b1a-033d6808ab56-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.580671 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22qn\" (UniqueName: \"kubernetes.io/projected/b55883d0-d8e0-4609-8b1a-033d6808ab56-kube-api-access-r22qn\") pod \"frr-k8s-webhook-server-78b44bf5bb-zvr84\" (UID: \"b55883d0-d8e0-4609-8b1a-033d6808ab56\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk4hm\" 
(UniqueName: \"kubernetes.io/projected/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-kube-api-access-hk4hm\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662702 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lzbh\" (UniqueName: \"kubernetes.io/projected/86420ee7-2594-4ef8-8b9d-05a073118389-kube-api-access-8lzbh\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662758 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metallb-excludel2\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662829 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662850 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" 
(UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.662867 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-cert\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663416 4808 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663470 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs podName:86420ee7-2594-4ef8-8b9d-05a073118389 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:54.163453681 +0000 UTC m=+957.679812754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs") pod "controller-69bbfbf88f-jvlrt" (UID: "86420ee7-2594-4ef8-8b9d-05a073118389") : secret "controller-certs-secret" not found Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663600 4808 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663636 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:54.163627146 +0000 UTC m=+957.679986219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "speaker-certs-secret" not found Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663695 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:09:53 crc kubenswrapper[4808]: E0217 16:09:53.663719 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:54.163712008 +0000 UTC m=+957.680071081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "metallb-memberlist" not found Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.664147 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metallb-excludel2\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.665858 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-cert\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.668944 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.686068 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.687385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk4hm\" (UniqueName: \"kubernetes.io/projected/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-kube-api-access-hk4hm\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.690327 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lzbh\" (UniqueName: \"kubernetes.io/projected/86420ee7-2594-4ef8-8b9d-05a073118389-kube-api-access-8lzbh\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:53 crc kubenswrapper[4808]: I0217 16:09:53.828260 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"c3c771a49af0bcbd3469553c9741cea6dc96fd7ff92fccbd9ecc8bccb1075e16"} Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.130137 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84"] Feb 17 16:09:54 crc kubenswrapper[4808]: W0217 16:09:54.137792 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb55883d0_d8e0_4609_8b1a_033d6808ab56.slice/crio-b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac WatchSource:0}: Error finding container b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac: Status 404 returned error can't find the container with id 
b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.171985 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.172105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.172198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:54 crc kubenswrapper[4808]: E0217 16:09:54.172528 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:09:54 crc kubenswrapper[4808]: E0217 16:09:54.172709 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:55.172675725 +0000 UTC m=+958.689034978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "metallb-memberlist" not found Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.178163 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-metrics-certs\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.178266 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/86420ee7-2594-4ef8-8b9d-05a073118389-metrics-certs\") pod \"controller-69bbfbf88f-jvlrt\" (UID: \"86420ee7-2594-4ef8-8b9d-05a073118389\") " pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.420364 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.836104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" event={"ID":"b55883d0-d8e0-4609-8b1a-033d6808ab56","Type":"ContainerStarted","Data":"b11c0b1c79ac784de52e2ec6f226913c0c5e08fb25f9f8efceeabd92dfa6feac"} Feb 17 16:09:54 crc kubenswrapper[4808]: I0217 16:09:54.920092 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-jvlrt"] Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.187283 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:55 crc kubenswrapper[4808]: E0217 16:09:55.187466 4808 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 17 16:09:55 crc kubenswrapper[4808]: E0217 16:09:55.187549 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist podName:c8e5bfe8-d4de-4863-b830-db146a4f0bd8 nodeName:}" failed. No retries permitted until 2026-02-17 16:09:57.187532037 +0000 UTC m=+960.703891110 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist") pod "speaker-2hrgh" (UID: "c8e5bfe8-d4de-4863-b830-db146a4f0bd8") : secret "metallb-memberlist" not found Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jvlrt" event={"ID":"86420ee7-2594-4ef8-8b9d-05a073118389","Type":"ContainerStarted","Data":"7c12a784d887fa8d0736db135fac58f27ed0e52fd1b88b44692c071f55a837b5"} Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jvlrt" event={"ID":"86420ee7-2594-4ef8-8b9d-05a073118389","Type":"ContainerStarted","Data":"de6d0c78fbe7f4242a4e07f1cbab2a12bf2a822ee3675c882bd8464fc6e2384b"} Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848876 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-jvlrt" event={"ID":"86420ee7-2594-4ef8-8b9d-05a073118389","Type":"ContainerStarted","Data":"2e3c264fbc1a73ebb1149c5181116f75cc5e2d92265d82d4dc9d6b03e1cdcd72"} Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.848889 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:09:55 crc kubenswrapper[4808]: I0217 16:09:55.872114 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-jvlrt" podStartSLOduration=2.872071908 podStartE2EDuration="2.872071908s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:55.869185731 +0000 UTC m=+959.385544804" watchObservedRunningTime="2026-02-17 16:09:55.872071908 +0000 UTC m=+959.388430981" Feb 17 16:09:56 crc kubenswrapper[4808]: 
I0217 16:09:56.535246 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pgghj" Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.642120 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pgghj"] Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.696422 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.697006 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqtsg" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server" containerID="cri-o://2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1" gracePeriod=2 Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.858438 4808 generic.go:334] "Generic (PLEG): container finished" podID="7cdb188e-770b-4b77-8396-a2422be880a4" containerID="2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1" exitCode=0 Feb 17 16:09:56 crc kubenswrapper[4808]: I0217 16:09:56.858537 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1"} Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.213257 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.226408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.232877 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/c8e5bfe8-d4de-4863-b830-db146a4f0bd8-memberlist\") pod \"speaker-2hrgh\" (UID: \"c8e5bfe8-d4de-4863-b830-db146a4f0bd8\") " pod="metallb-system/speaker-2hrgh" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.328241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") pod \"7cdb188e-770b-4b77-8396-a2422be880a4\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.328346 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") pod \"7cdb188e-770b-4b77-8396-a2422be880a4\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.328376 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") pod \"7cdb188e-770b-4b77-8396-a2422be880a4\" (UID: \"7cdb188e-770b-4b77-8396-a2422be880a4\") " Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.330087 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities" (OuterVolumeSpecName: "utilities") pod "7cdb188e-770b-4b77-8396-a2422be880a4" (UID: "7cdb188e-770b-4b77-8396-a2422be880a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.341867 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc" (OuterVolumeSpecName: "kube-api-access-gmplc") pod "7cdb188e-770b-4b77-8396-a2422be880a4" (UID: "7cdb188e-770b-4b77-8396-a2422be880a4"). InnerVolumeSpecName "kube-api-access-gmplc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.374163 4808 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mlfcz" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.388821 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2hrgh" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.392332 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7cdb188e-770b-4b77-8396-a2422be880a4" (UID: "7cdb188e-770b-4b77-8396-a2422be880a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:09:57 crc kubenswrapper[4808]: W0217 16:09:57.420879 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e5bfe8_d4de_4863_b830_db146a4f0bd8.slice/crio-9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1 WatchSource:0}: Error finding container 9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1: Status 404 returned error can't find the container with id 9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1 Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.429776 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.429809 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmplc\" (UniqueName: \"kubernetes.io/projected/7cdb188e-770b-4b77-8396-a2422be880a4-kube-api-access-gmplc\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.429821 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7cdb188e-770b-4b77-8396-a2422be880a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.673815 4808 scope.go:117] "RemoveContainer" containerID="2d9bae86441156ea0978a61aa55e3e05d2e584ec61842c859e61158d7e3209d1" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.696019 4808 scope.go:117] "RemoveContainer" containerID="90673874b32c0b13b6c696df3d7ec418349328c7a6d184134dcf0c00617dcaee" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.713477 4808 scope.go:117] "RemoveContainer" containerID="47a3ebdb89ce68c6b02152046e0104b05bde9ba746322e9e754da8447f0e2b5b" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 
16:09:57.866745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqtsg" Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.868714 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hrgh" event={"ID":"c8e5bfe8-d4de-4863-b830-db146a4f0bd8","Type":"ContainerStarted","Data":"9dc08dcc0c5641f62390b9bcd9f1ec1ac1aac7a5024ae461de08318c227a34e1"} Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.868769 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqtsg" event={"ID":"7cdb188e-770b-4b77-8396-a2422be880a4","Type":"ContainerDied","Data":"ef844668f5d5756ff7b1ef705f4ea124e4d7a7bd509d8e67479cb418a27a08a4"} Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.912959 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:09:57 crc kubenswrapper[4808]: I0217 16:09:57.918108 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqtsg"] Feb 17 16:09:58 crc kubenswrapper[4808]: I0217 16:09:58.876536 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hrgh" event={"ID":"c8e5bfe8-d4de-4863-b830-db146a4f0bd8","Type":"ContainerStarted","Data":"22f85ac8d5c3800c41c82c02ba4371d06f3f484ada503e831c8b840c81e7a06c"} Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.155278 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" path="/var/lib/kubelet/pods/7cdb188e-770b-4b77-8396-a2422be880a4/volumes" Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.889143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2hrgh" event={"ID":"c8e5bfe8-d4de-4863-b830-db146a4f0bd8","Type":"ContainerStarted","Data":"38dd71844541127279da98403b0903521a13b00e192825a9d7e29548457789ba"} Feb 17 16:09:59 crc 
kubenswrapper[4808]: I0217 16:09:59.889391 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2hrgh" Feb 17 16:09:59 crc kubenswrapper[4808]: I0217 16:09:59.913553 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2hrgh" podStartSLOduration=6.913533755 podStartE2EDuration="6.913533755s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:09:59.907987886 +0000 UTC m=+963.424346959" watchObservedRunningTime="2026-02-17 16:09:59.913533755 +0000 UTC m=+963.429892818" Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.917786 4808 generic.go:334] "Generic (PLEG): container finished" podID="42711d14-278f-41eb-80ce-2e67add356b9" containerID="100e4ef4b2f2ab83c6d70346f4353427fef8930a51342fb983dcc3630a173e9e" exitCode=0 Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.917923 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerDied","Data":"100e4ef4b2f2ab83c6d70346f4353427fef8930a51342fb983dcc3630a173e9e"} Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.922562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" event={"ID":"b55883d0-d8e0-4609-8b1a-033d6808ab56","Type":"ContainerStarted","Data":"62a40b9d296b95dcdf2a1c11152b1ea4cb0672bb35ba8e4b44359b3d966e54d1"} Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.922763 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:10:03 crc kubenswrapper[4808]: I0217 16:10:03.954906 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" 
podStartSLOduration=2.271503333 podStartE2EDuration="10.954889689s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="2026-02-17 16:09:54.140315961 +0000 UTC m=+957.656675034" lastFinishedPulling="2026-02-17 16:10:02.823702277 +0000 UTC m=+966.340061390" observedRunningTime="2026-02-17 16:10:03.952124436 +0000 UTC m=+967.468483509" watchObservedRunningTime="2026-02-17 16:10:03.954889689 +0000 UTC m=+967.471248762" Feb 17 16:10:04 crc kubenswrapper[4808]: I0217 16:10:04.930013 4808 generic.go:334] "Generic (PLEG): container finished" podID="42711d14-278f-41eb-80ce-2e67add356b9" containerID="fbf44e61aabf63de03154baaba818c6e4afefb871dc6642842828d5e075d169d" exitCode=0 Feb 17 16:10:04 crc kubenswrapper[4808]: I0217 16:10:04.930179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerDied","Data":"fbf44e61aabf63de03154baaba818c6e4afefb871dc6642842828d5e075d169d"} Feb 17 16:10:05 crc kubenswrapper[4808]: I0217 16:10:05.942733 4808 generic.go:334] "Generic (PLEG): container finished" podID="42711d14-278f-41eb-80ce-2e67add356b9" containerID="f6095819d9cf06e5da0bac1456811b9d743389d3b95aba5c0568a280f9a26e65" exitCode=0 Feb 17 16:10:05 crc kubenswrapper[4808]: I0217 16:10:05.942838 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerDied","Data":"f6095819d9cf06e5da0bac1456811b9d743389d3b95aba5c0568a280f9a26e65"} Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"e8624e2c142931f20e19390ce6be8cc6d6f8c6116d64fcd2ec7b2085945fd8a3"} Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979715 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"5bb9daff2c4f52b8d2b423730b2cb8deebab166cf2cd799d545d3ef0a857b2cd"} Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"cde5c2c2753bf8283f3f7824ac8948dc7bac72507eb30e1ea30820438d3e8b29"} Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"4d62b7eea66b4c3e49c633262db57a4de4fb4268d5877d855b89e0dd26877731"} Feb 17 16:10:06 crc kubenswrapper[4808]: I0217 16:10:06.979751 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"6646c5670cb8cc216a5fff5945b2086ec4bad170625cf3909b865cc07cca6080"} Feb 17 16:10:07 crc kubenswrapper[4808]: I0217 16:10:07.997320 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-c58vl" event={"ID":"42711d14-278f-41eb-80ce-2e67add356b9","Type":"ContainerStarted","Data":"911eb6d8e00e4ee6440bab53d779a0cfa05bcb524777535451c47c556dd43f06"} Feb 17 16:10:07 crc kubenswrapper[4808]: I0217 16:10:07.997627 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:10:08 crc kubenswrapper[4808]: I0217 16:10:08.046866 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-c58vl" podStartSLOduration=6.026997754 podStartE2EDuration="15.046838386s" podCreationTimestamp="2026-02-17 16:09:53 +0000 UTC" firstStartedPulling="2026-02-17 16:09:53.823158781 +0000 UTC m=+957.339517854" lastFinishedPulling="2026-02-17 16:10:02.842999383 
+0000 UTC m=+966.359358486" observedRunningTime="2026-02-17 16:10:08.035333379 +0000 UTC m=+971.551692492" watchObservedRunningTime="2026-02-17 16:10:08.046838386 +0000 UTC m=+971.563197489" Feb 17 16:10:08 crc kubenswrapper[4808]: I0217 16:10:08.669810 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:10:08 crc kubenswrapper[4808]: I0217 16:10:08.708560 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.382829 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"] Feb 17 16:10:09 crc kubenswrapper[4808]: E0217 16:10:09.383813 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-utilities" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.383837 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-utilities" Feb 17 16:10:09 crc kubenswrapper[4808]: E0217 16:10:09.383866 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-content" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.383875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="extract-content" Feb 17 16:10:09 crc kubenswrapper[4808]: E0217 16:10:09.383888 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.383897 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.384075 4808 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7cdb188e-770b-4b77-8396-a2422be880a4" containerName="registry-server" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.385497 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.399190 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"] Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.411235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.411358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.411432 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.512732 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod 
\"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.512803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.512834 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.513351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.513467 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"redhat-marketplace-b22t4\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.534418 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"redhat-marketplace-b22t4\" (UID: 
\"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.707830 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:09 crc kubenswrapper[4808]: I0217 16:10:09.974241 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"] Feb 17 16:10:10 crc kubenswrapper[4808]: I0217 16:10:10.027709 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerStarted","Data":"0a8484881de1d70ec07925f19c53404a1c36bb6e619b19475afa5fa460840f39"} Feb 17 16:10:11 crc kubenswrapper[4808]: I0217 16:10:11.041419 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerID="ca6dad098d98000904ee193800e5cff6af216019d831be3c7082c77fd328066f" exitCode=0 Feb 17 16:10:11 crc kubenswrapper[4808]: I0217 16:10:11.041541 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"ca6dad098d98000904ee193800e5cff6af216019d831be3c7082c77fd328066f"} Feb 17 16:10:12 crc kubenswrapper[4808]: I0217 16:10:12.057969 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerStarted","Data":"55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f"} Feb 17 16:10:13 crc kubenswrapper[4808]: I0217 16:10:13.069373 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerID="55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f" exitCode=0 Feb 17 16:10:13 crc kubenswrapper[4808]: I0217 
16:10:13.069439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f"} Feb 17 16:10:13 crc kubenswrapper[4808]: I0217 16:10:13.700052 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-zvr84" Feb 17 16:10:14 crc kubenswrapper[4808]: I0217 16:10:14.080717 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerStarted","Data":"4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec"} Feb 17 16:10:14 crc kubenswrapper[4808]: I0217 16:10:14.103659 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b22t4" podStartSLOduration=2.67489184 podStartE2EDuration="5.103641552s" podCreationTimestamp="2026-02-17 16:10:09 +0000 UTC" firstStartedPulling="2026-02-17 16:10:11.044457966 +0000 UTC m=+974.560817079" lastFinishedPulling="2026-02-17 16:10:13.473207678 +0000 UTC m=+976.989566791" observedRunningTime="2026-02-17 16:10:14.101400322 +0000 UTC m=+977.617759405" watchObservedRunningTime="2026-02-17 16:10:14.103641552 +0000 UTC m=+977.620000635" Feb 17 16:10:14 crc kubenswrapper[4808]: I0217 16:10:14.426106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-jvlrt" Feb 17 16:10:17 crc kubenswrapper[4808]: I0217 16:10:17.393759 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2hrgh" Feb 17 16:10:19 crc kubenswrapper[4808]: I0217 16:10:19.708992 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:19 crc 
kubenswrapper[4808]: I0217 16:10:19.709891 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:19 crc kubenswrapper[4808]: I0217 16:10:19.773631 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.190350 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"] Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.193706 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.200142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"] Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.203869 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.204093 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.208512 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ms6kq" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.236349 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.277011 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"openstack-operator-index-qnxrh\" (UID: 
\"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.379102 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"openstack-operator-index-qnxrh\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.396398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"openstack-operator-index-qnxrh\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.518324 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:20 crc kubenswrapper[4808]: I0217 16:10:20.963040 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"] Feb 17 16:10:20 crc kubenswrapper[4808]: W0217 16:10:20.979034 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac34750_b7bc_47ce_b128_10bfc5e9c8cf.slice/crio-7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30 WatchSource:0}: Error finding container 7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30: Status 404 returned error can't find the container with id 7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30 Feb 17 16:10:21 crc kubenswrapper[4808]: I0217 16:10:21.156227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerStarted","Data":"7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30"} Feb 17 16:10:23 crc kubenswrapper[4808]: I0217 16:10:23.675466 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-c58vl" Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.187069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerStarted","Data":"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"} Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.216288 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qnxrh" podStartSLOduration=1.346360115 podStartE2EDuration="4.21626352s" podCreationTimestamp="2026-02-17 16:10:20 +0000 UTC" firstStartedPulling="2026-02-17 16:10:20.982968599 +0000 UTC 
m=+984.499327682" lastFinishedPulling="2026-02-17 16:10:23.852872004 +0000 UTC m=+987.369231087" observedRunningTime="2026-02-17 16:10:24.199350258 +0000 UTC m=+987.715709361" watchObservedRunningTime="2026-02-17 16:10:24.21626352 +0000 UTC m=+987.732622623" Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.813604 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"] Feb 17 16:10:24 crc kubenswrapper[4808]: I0217 16:10:24.813865 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b22t4" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server" containerID="cri-o://4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec" gracePeriod=2 Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.019563 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"] Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.200201 4808 generic.go:334] "Generic (PLEG): container finished" podID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerID="4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec" exitCode=0 Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.200245 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec"} Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.273477 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.354226 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") pod \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.354316 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") pod \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.354568 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") pod \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\" (UID: \"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab\") " Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.355769 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities" (OuterVolumeSpecName: "utilities") pod "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" (UID: "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.364139 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt" (OuterVolumeSpecName: "kube-api-access-5cdlt") pod "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" (UID: "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab"). InnerVolumeSpecName "kube-api-access-5cdlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.389163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" (UID: "6a3e6872-b5f3-4b28-abb5-3f721a69d3ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.458462 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.458523 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.458546 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cdlt\" (UniqueName: \"kubernetes.io/projected/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab-kube-api-access-5cdlt\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.821698 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-75t5f"] Feb 17 16:10:25 crc kubenswrapper[4808]: E0217 16:10:25.822135 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-content" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822169 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-content" Feb 17 16:10:25 crc kubenswrapper[4808]: E0217 16:10:25.822198 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-utilities" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822215 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="extract-utilities" Feb 17 16:10:25 crc kubenswrapper[4808]: E0217 16:10:25.822240 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822262 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.822524 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" containerName="registry-server" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.823265 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.837519 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-75t5f"] Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.880384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlh5x\" (UniqueName: \"kubernetes.io/projected/aa72ff82-f411-42f6-8144-937ca196211b-kube-api-access-mlh5x\") pod \"openstack-operator-index-75t5f\" (UID: \"aa72ff82-f411-42f6-8144-937ca196211b\") " pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:25 crc kubenswrapper[4808]: I0217 16:10:25.982547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlh5x\" (UniqueName: \"kubernetes.io/projected/aa72ff82-f411-42f6-8144-937ca196211b-kube-api-access-mlh5x\") pod 
\"openstack-operator-index-75t5f\" (UID: \"aa72ff82-f411-42f6-8144-937ca196211b\") " pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.007709 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlh5x\" (UniqueName: \"kubernetes.io/projected/aa72ff82-f411-42f6-8144-937ca196211b-kube-api-access-mlh5x\") pod \"openstack-operator-index-75t5f\" (UID: \"aa72ff82-f411-42f6-8144-937ca196211b\") " pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.191127 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213405 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b22t4" event={"ID":"6a3e6872-b5f3-4b28-abb5-3f721a69d3ab","Type":"ContainerDied","Data":"0a8484881de1d70ec07925f19c53404a1c36bb6e619b19475afa5fa460840f39"} Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213488 4808 scope.go:117] "RemoveContainer" containerID="4b75f60011cf28fb63cb77cf2df3af9aa65761ca0c8e7f1ad61a06e169e399ec" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213515 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-qnxrh" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server" containerID="cri-o://5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" gracePeriod=2 Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.213421 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b22t4" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.411339 4808 scope.go:117] "RemoveContainer" containerID="55a1ebe71976ac4d4cadff189408da31cb22e90fad0ba07ebd0b581c8feed71f" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.416774 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"] Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.432187 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b22t4"] Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.452874 4808 scope.go:117] "RemoveContainer" containerID="ca6dad098d98000904ee193800e5cff6af216019d831be3c7082c77fd328066f" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.644542 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.723000 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-75t5f"] Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.795901 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") pod \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\" (UID: \"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf\") " Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.803937 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8" (OuterVolumeSpecName: "kube-api-access-jxsv8") pod "0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" (UID: "0ac34750-b7bc-47ce-b128-10bfc5e9c8cf"). InnerVolumeSpecName "kube-api-access-jxsv8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:26 crc kubenswrapper[4808]: I0217 16:10:26.897398 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxsv8\" (UniqueName: \"kubernetes.io/projected/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf-kube-api-access-jxsv8\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.159670 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3e6872-b5f3-4b28-abb5-3f721a69d3ab" path="/var/lib/kubelet/pods/6a3e6872-b5f3-4b28-abb5-3f721a69d3ab/volumes" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.222292 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-75t5f" event={"ID":"aa72ff82-f411-42f6-8144-937ca196211b","Type":"ContainerStarted","Data":"4b501886246d01696f527c3a9eef623152d64357a2a30f6ac7df3bb823cc2733"} Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.222351 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-75t5f" event={"ID":"aa72ff82-f411-42f6-8144-937ca196211b","Type":"ContainerStarted","Data":"e480968f2a83761fae875c8aed263cfc3dbabd013086bca5f6877d0d3a930751"} Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224843 4808 generic.go:334] "Generic (PLEG): container finished" podID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" exitCode=0 Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerDied","Data":"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"} Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224918 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qnxrh" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224945 4808 scope.go:117] "RemoveContainer" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.224934 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qnxrh" event={"ID":"0ac34750-b7bc-47ce-b128-10bfc5e9c8cf","Type":"ContainerDied","Data":"7dfd596ecefcb9b7f65cea8307ce1e80d8368db6e2b668556763a24fcc94dd30"} Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.250042 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-75t5f" podStartSLOduration=2.206566054 podStartE2EDuration="2.250002336s" podCreationTimestamp="2026-02-17 16:10:25 +0000 UTC" firstStartedPulling="2026-02-17 16:10:26.729920111 +0000 UTC m=+990.246279194" lastFinishedPulling="2026-02-17 16:10:26.773356383 +0000 UTC m=+990.289715476" observedRunningTime="2026-02-17 16:10:27.243643945 +0000 UTC m=+990.760003108" watchObservedRunningTime="2026-02-17 16:10:27.250002336 +0000 UTC m=+990.766361409" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.252815 4808 scope.go:117] "RemoveContainer" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" Feb 17 16:10:27 crc kubenswrapper[4808]: E0217 16:10:27.253593 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b\": container with ID starting with 5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b not found: ID does not exist" containerID="5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.253652 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b"} err="failed to get container status \"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b\": rpc error: code = NotFound desc = could not find container \"5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b\": container with ID starting with 5fd188d3caa8e18b174683f34f9ee94fa3c92333f2404e05b31941a90f76d47b not found: ID does not exist" Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.268715 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"] Feb 17 16:10:27 crc kubenswrapper[4808]: I0217 16:10:27.274131 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-qnxrh"] Feb 17 16:10:29 crc kubenswrapper[4808]: I0217 16:10:29.160117 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" path="/var/lib/kubelet/pods/0ac34750-b7bc-47ce-b128-10bfc5e9c8cf/volumes" Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.191528 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.192200 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.229714 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:36 crc kubenswrapper[4808]: I0217 16:10:36.351841 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-75t5f" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.482238 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"] Feb 17 16:10:39 crc kubenswrapper[4808]: E0217 16:10:39.483211 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.483238 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.483484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac34750-b7bc-47ce-b128-10bfc5e9c8cf" containerName="registry-server" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.485069 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.493060 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tgxbp" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.497680 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"] Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.604496 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.604914 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vk9\" (UniqueName: 
\"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.605081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.707676 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.707751 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.707783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod 
\"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.708338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.708429 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.737862 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:39 crc kubenswrapper[4808]: I0217 16:10:39.825104 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.032801 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6"] Feb 17 16:10:40 crc kubenswrapper[4808]: W0217 16:10:40.037835 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb0fef44_0d18_499b_bfd1_c684136b5095.slice/crio-10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0 WatchSource:0}: Error finding container 10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0: Status 404 returned error can't find the container with id 10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0 Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.346811 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerID="6798e3bf9c5690ebcf16cfb39c9e927164cf9e99a1661245cb27ffb486e54af4" exitCode=0 Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.346873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"6798e3bf9c5690ebcf16cfb39c9e927164cf9e99a1661245cb27ffb486e54af4"} Feb 17 16:10:40 crc kubenswrapper[4808]: I0217 16:10:40.346917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerStarted","Data":"10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0"} Feb 17 16:10:41 crc kubenswrapper[4808]: I0217 16:10:41.357956 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerID="288ee85f720f8808086f4f8617e718281d80757ee7bb3a062de0a4491fa40350" exitCode=0 Feb 17 16:10:41 crc kubenswrapper[4808]: I0217 16:10:41.358070 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"288ee85f720f8808086f4f8617e718281d80757ee7bb3a062de0a4491fa40350"} Feb 17 16:10:42 crc kubenswrapper[4808]: I0217 16:10:42.369140 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerID="1eacd106543c3b1eed89107e6d211095f090e29dbc2bc301fc76abb05f46fa29" exitCode=0 Feb 17 16:10:42 crc kubenswrapper[4808]: I0217 16:10:42.369226 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"1eacd106543c3b1eed89107e6d211095f090e29dbc2bc301fc76abb05f46fa29"} Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.695137 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.877686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") pod \"bb0fef44-0d18-499b-bfd1-c684136b5095\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.877758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") pod \"bb0fef44-0d18-499b-bfd1-c684136b5095\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.877833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") pod \"bb0fef44-0d18-499b-bfd1-c684136b5095\" (UID: \"bb0fef44-0d18-499b-bfd1-c684136b5095\") " Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.878468 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle" (OuterVolumeSpecName: "bundle") pod "bb0fef44-0d18-499b-bfd1-c684136b5095" (UID: "bb0fef44-0d18-499b-bfd1-c684136b5095"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.886744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9" (OuterVolumeSpecName: "kube-api-access-v9vk9") pod "bb0fef44-0d18-499b-bfd1-c684136b5095" (UID: "bb0fef44-0d18-499b-bfd1-c684136b5095"). InnerVolumeSpecName "kube-api-access-v9vk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.898931 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util" (OuterVolumeSpecName: "util") pod "bb0fef44-0d18-499b-bfd1-c684136b5095" (UID: "bb0fef44-0d18-499b-bfd1-c684136b5095"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.980282 4808 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.980339 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9vk9\" (UniqueName: \"kubernetes.io/projected/bb0fef44-0d18-499b-bfd1-c684136b5095-kube-api-access-v9vk9\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:43 crc kubenswrapper[4808]: I0217 16:10:43.980359 4808 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0fef44-0d18-499b-bfd1-c684136b5095-util\") on node \"crc\" DevicePath \"\"" Feb 17 16:10:44 crc kubenswrapper[4808]: I0217 16:10:44.389477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" event={"ID":"bb0fef44-0d18-499b-bfd1-c684136b5095","Type":"ContainerDied","Data":"10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0"} Feb 17 16:10:44 crc kubenswrapper[4808]: I0217 16:10:44.389540 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10e5456f99a362bdb2cccb0bef512371f03322ceb2c84d4693eab11d788303e0" Feb 17 16:10:44 crc kubenswrapper[4808]: I0217 16:10:44.389563 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.174107 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9"] Feb 17 16:10:48 crc kubenswrapper[4808]: E0217 16:10:48.175009 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="pull" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175027 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="pull" Feb 17 16:10:48 crc kubenswrapper[4808]: E0217 16:10:48.175057 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="extract" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175066 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="extract" Feb 17 16:10:48 crc kubenswrapper[4808]: E0217 16:10:48.175078 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="util" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175087 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="util" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175218 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0fef44-0d18-499b-bfd1-c684136b5095" containerName="extract" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.175770 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.181824 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-n8kv8" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.207555 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9"] Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.246280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4lr2\" (UniqueName: \"kubernetes.io/projected/2db6cd8b-961f-442e-8bd4-ced98807709a-kube-api-access-m4lr2\") pod \"openstack-operator-controller-init-64549bfd8b-rwgq9\" (UID: \"2db6cd8b-961f-442e-8bd4-ced98807709a\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.347172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4lr2\" (UniqueName: \"kubernetes.io/projected/2db6cd8b-961f-442e-8bd4-ced98807709a-kube-api-access-m4lr2\") pod \"openstack-operator-controller-init-64549bfd8b-rwgq9\" (UID: \"2db6cd8b-961f-442e-8bd4-ced98807709a\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.371233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4lr2\" (UniqueName: \"kubernetes.io/projected/2db6cd8b-961f-442e-8bd4-ced98807709a-kube-api-access-m4lr2\") pod \"openstack-operator-controller-init-64549bfd8b-rwgq9\" (UID: \"2db6cd8b-961f-442e-8bd4-ced98807709a\") " pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.499963 4808 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.745650 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9"] Feb 17 16:10:48 crc kubenswrapper[4808]: I0217 16:10:48.758156 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:10:49 crc kubenswrapper[4808]: I0217 16:10:49.431550 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" event={"ID":"2db6cd8b-961f-442e-8bd4-ced98807709a","Type":"ContainerStarted","Data":"26f0e5f51901ff6ef8217fa5621b4138098c6548eb8cef1a8ec924e81786dfd1"} Feb 17 16:10:53 crc kubenswrapper[4808]: I0217 16:10:53.480140 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" event={"ID":"2db6cd8b-961f-442e-8bd4-ced98807709a","Type":"ContainerStarted","Data":"3d9a365357cef78af96c30126ed8d78286157969a21437877031df0b49d4f50f"} Feb 17 16:10:53 crc kubenswrapper[4808]: I0217 16:10:53.480933 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:10:53 crc kubenswrapper[4808]: I0217 16:10:53.532248 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" podStartSLOduration=1.791937088 podStartE2EDuration="5.532216413s" podCreationTimestamp="2026-02-17 16:10:48 +0000 UTC" firstStartedPulling="2026-02-17 16:10:48.757951244 +0000 UTC m=+1012.274310317" lastFinishedPulling="2026-02-17 16:10:52.498230569 +0000 UTC m=+1016.014589642" observedRunningTime="2026-02-17 16:10:53.523894701 +0000 UTC m=+1017.040253874" watchObservedRunningTime="2026-02-17 
16:10:53.532216413 +0000 UTC m=+1017.048575526" Feb 17 16:10:58 crc kubenswrapper[4808]: I0217 16:10:58.503241 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-64549bfd8b-rwgq9" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.046015 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.048144 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.050813 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4b5sk" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.061288 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.062253 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.067141 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-t9jrj" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.071871 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.078186 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.102384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltm6\" (UniqueName: \"kubernetes.io/projected/3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb-kube-api-access-xltm6\") pod \"barbican-operator-controller-manager-868647ff47-cjh7p\" (UID: \"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.102470 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6jv\" (UniqueName: \"kubernetes.io/projected/77df5d1f-daff-4508-861a-335ab87f2366-kube-api-access-tm6jv\") pod \"cinder-operator-controller-manager-5d946d989d-4cv77\" (UID: \"77df5d1f-daff-4508-861a-335ab87f2366\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.132137 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.133146 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.138117 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-qxn4p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.145778 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.181084 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.181939 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.186273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-br5nd" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.201401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.209212 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xltm6\" (UniqueName: \"kubernetes.io/projected/3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb-kube-api-access-xltm6\") pod \"barbican-operator-controller-manager-868647ff47-cjh7p\" (UID: \"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.209262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6jv\" (UniqueName: 
\"kubernetes.io/projected/77df5d1f-daff-4508-861a-335ab87f2366-kube-api-access-tm6jv\") pod \"cinder-operator-controller-manager-5d946d989d-4cv77\" (UID: \"77df5d1f-daff-4508-861a-335ab87f2366\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.209333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5lz8\" (UniqueName: \"kubernetes.io/projected/e2e1b5f4-7ed2-4ab1-871b-1974a7559252-kube-api-access-b5lz8\") pod \"designate-operator-controller-manager-6d8bf5c495-gl97b\" (UID: \"e2e1b5f4-7ed2-4ab1-871b-1974a7559252\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.246704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xltm6\" (UniqueName: \"kubernetes.io/projected/3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb-kube-api-access-xltm6\") pod \"barbican-operator-controller-manager-868647ff47-cjh7p\" (UID: \"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.247153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm6jv\" (UniqueName: \"kubernetes.io/projected/77df5d1f-daff-4508-861a-335ab87f2366-kube-api-access-tm6jv\") pod \"cinder-operator-controller-manager-5d946d989d-4cv77\" (UID: \"77df5d1f-daff-4508-861a-335ab87f2366\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.265322 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.274863 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.291784 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"] Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.292704 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.310839 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5lz8\" (UniqueName: \"kubernetes.io/projected/e2e1b5f4-7ed2-4ab1-871b-1974a7559252-kube-api-access-b5lz8\") pod \"designate-operator-controller-manager-6d8bf5c495-gl97b\" (UID: \"e2e1b5f4-7ed2-4ab1-871b-1974a7559252\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.310972 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8xj\" (UniqueName: \"kubernetes.io/projected/b622bb16-c5b4-45ea-b493-e681d36d49ac-kube-api-access-fb8xj\") pod \"glance-operator-controller-manager-77987464f4-b7hkk\" (UID: \"b622bb16-c5b4-45ea-b493-e681d36d49ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.320953 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-nwh6w" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.321186 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-cgr2n" Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.344632 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.357539 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.361148 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5lz8\" (UniqueName: \"kubernetes.io/projected/e2e1b5f4-7ed2-4ab1-871b-1974a7559252-kube-api-access-b5lz8\") pod \"designate-operator-controller-manager-6d8bf5c495-gl97b\" (UID: \"e2e1b5f4-7ed2-4ab1-871b-1974a7559252\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.387859 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.388521 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.395292 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.395332 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.395350 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.397170 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.398640 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.400289 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.400470 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b2sv9"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.400639 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4cfvg"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.409302 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.412766 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.412858 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8xj\" (UniqueName: \"kubernetes.io/projected/b622bb16-c5b4-45ea-b493-e681d36d49ac-kube-api-access-fb8xj\") pod \"glance-operator-controller-manager-77987464f4-b7hkk\" (UID: \"b622bb16-c5b4-45ea-b493-e681d36d49ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.412897 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65c7l\" (UniqueName: \"kubernetes.io/projected/d4bd0818-617e-418a-b7c7-f70ba7ebc3d8-kube-api-access-65c7l\") pod \"heat-operator-controller-manager-69f49c598c-xv924\" (UID: \"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.453237 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8xj\" (UniqueName: \"kubernetes.io/projected/b622bb16-c5b4-45ea-b493-e681d36d49ac-kube-api-access-fb8xj\") pod \"glance-operator-controller-manager-77987464f4-b7hkk\" (UID: \"b622bb16-c5b4-45ea-b493-e681d36d49ac\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.453471 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.454439 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.457648 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-754r7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.467329 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.473051 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.494764 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.496059 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.504946 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-l4w9c"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.508797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522244 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65c7l\" (UniqueName: \"kubernetes.io/projected/d4bd0818-617e-418a-b7c7-f70ba7ebc3d8-kube-api-access-65c7l\") pod \"heat-operator-controller-manager-69f49c598c-xv924\" (UID: \"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522307 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgrh\" (UniqueName: \"kubernetes.io/projected/96baec58-63b9-49cd-9cf4-32639e58d4ac-kube-api-access-4jgrh\") pod \"keystone-operator-controller-manager-b4d948c87-8xfc6\" (UID: \"96baec58-63b9-49cd-9cf4-32639e58d4ac\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfgx\" (UniqueName: \"kubernetes.io/projected/ace1fd54-7ff8-45b9-a77b-c3908044365e-kube-api-access-2zfgx\") pod \"ironic-operator-controller-manager-554564d7fc-thpj7\" (UID: \"ace1fd54-7ff8-45b9-a77b-c3908044365e\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522406 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.522451 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czn74\" (UniqueName: \"kubernetes.io/projected/6508a74d-2dba-4d1b-910c-95c9463c15a4-kube-api-access-czn74\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.526656 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.527864 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.544633 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.544995 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wkk2j"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.545747 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.571296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rqg\" (UniqueName: \"kubernetes.io/projected/681f334b-d0ac-43dc-babb-92d9cb7c0440-kube-api-access-v4rqg\") pod \"horizon-operator-controller-manager-5b9b8895d5-plpr2\" (UID: \"681f334b-d0ac-43dc-babb-92d9cb7c0440\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.577115 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65c7l\" (UniqueName: \"kubernetes.io/projected/d4bd0818-617e-418a-b7c7-f70ba7ebc3d8-kube-api-access-65c7l\") pod \"heat-operator-controller-manager-69f49c598c-xv924\" (UID: \"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.582635 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.583726 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.585337 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.596663 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jltzm"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.604327 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.608167 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-rrpvb"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.626879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zfgx\" (UniqueName: \"kubernetes.io/projected/ace1fd54-7ff8-45b9-a77b-c3908044365e-kube-api-access-2zfgx\") pod \"ironic-operator-controller-manager-554564d7fc-thpj7\" (UID: \"ace1fd54-7ff8-45b9-a77b-c3908044365e\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.626961 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dhp\" (UniqueName: \"kubernetes.io/projected/a6f8ca14-e1db-4dcc-a64d-7bf137105e80-kube-api-access-w8dhp\") pod \"nova-operator-controller-manager-567668f5cf-t9k25\" (UID: \"a6f8ca14-e1db-4dcc-a64d-7bf137105e80\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czn74\" (UniqueName: \"kubernetes.io/projected/6508a74d-2dba-4d1b-910c-95c9463c15a4-kube-api-access-czn74\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627058 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp8l\" (UniqueName: \"kubernetes.io/projected/a40e52a1-9867-413a-81fb-324789e0a009-kube-api-access-dhp8l\") pod \"mariadb-operator-controller-manager-6994f66f48-vgbmj\" (UID: \"a40e52a1-9867-413a-81fb-324789e0a009\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627100 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgrh\" (UniqueName: \"kubernetes.io/projected/96baec58-63b9-49cd-9cf4-32639e58d4ac-kube-api-access-4jgrh\") pod \"keystone-operator-controller-manager-b4d948c87-8xfc6\" (UID: \"96baec58-63b9-49cd-9cf4-32639e58d4ac\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627126 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.627158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgkjh\" (UniqueName: \"kubernetes.io/projected/93278ccd-52fe-4848-9a46-3f47369d47ab-kube-api-access-hgkjh\") pod \"manila-operator-controller-manager-54f6768c69-tkhr5\" (UID: \"93278ccd-52fe-4848-9a46-3f47369d47ab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"
Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.627684 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.627751 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.127732723 +0000 UTC m=+1043.644091796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.635165 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.636005 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.640150 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-82s6w"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.643529 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.649697 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.661382 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.682727 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czn74\" (UniqueName: \"kubernetes.io/projected/6508a74d-2dba-4d1b-910c-95c9463c15a4-kube-api-access-czn74\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.684146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jgrh\" (UniqueName: \"kubernetes.io/projected/96baec58-63b9-49cd-9cf4-32639e58d4ac-kube-api-access-4jgrh\") pod \"keystone-operator-controller-manager-b4d948c87-8xfc6\" (UID: \"96baec58-63b9-49cd-9cf4-32639e58d4ac\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.686756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.696715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.700483 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.704049 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.705932 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zfgx\" (UniqueName: \"kubernetes.io/projected/ace1fd54-7ff8-45b9-a77b-c3908044365e-kube-api-access-2zfgx\") pod \"ironic-operator-controller-manager-554564d7fc-thpj7\" (UID: \"ace1fd54-7ff8-45b9-a77b-c3908044365e\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.706721 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tkkqw"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.706834 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.729181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgkjh\" (UniqueName: \"kubernetes.io/projected/93278ccd-52fe-4848-9a46-3f47369d47ab-kube-api-access-hgkjh\") pod \"manila-operator-controller-manager-54f6768c69-tkhr5\" (UID: \"93278ccd-52fe-4848-9a46-3f47369d47ab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.732997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733074 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz9k7\" (UniqueName: \"kubernetes.io/projected/a2547c9d-80d6-491d-8517-26327e35a1f4-kube-api-access-jz9k7\") pod \"octavia-operator-controller-manager-69f8888797-xp9sf\" (UID: \"a2547c9d-80d6-491d-8517-26327e35a1f4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733179 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dhp\" (UniqueName: \"kubernetes.io/projected/a6f8ca14-e1db-4dcc-a64d-7bf137105e80-kube-api-access-w8dhp\") pod \"nova-operator-controller-manager-567668f5cf-t9k25\" (UID: \"a6f8ca14-e1db-4dcc-a64d-7bf137105e80\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.733965 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhp8l\" (UniqueName: \"kubernetes.io/projected/a40e52a1-9867-413a-81fb-324789e0a009-kube-api-access-dhp8l\") pod \"mariadb-operator-controller-manager-6994f66f48-vgbmj\" (UID: \"a40e52a1-9867-413a-81fb-324789e0a009\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.734016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xt8q\" (UniqueName: \"kubernetes.io/projected/2ec18a16-766f-4a0c-a393-0ca7a999011e-kube-api-access-4xt8q\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.734431 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lvx\" (UniqueName: \"kubernetes.io/projected/8d4c91a6-8441-45a6-bb6a-7655ba464fb9-kube-api-access-s6lvx\") pod \"neutron-operator-controller-manager-64ddbf8bb-kg6xx\" (UID: \"8d4c91a6-8441-45a6-bb6a-7655ba464fb9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.739551 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.766197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgkjh\" (UniqueName: \"kubernetes.io/projected/93278ccd-52fe-4848-9a46-3f47369d47ab-kube-api-access-hgkjh\") pod \"manila-operator-controller-manager-54f6768c69-tkhr5\" (UID: \"93278ccd-52fe-4848-9a46-3f47369d47ab\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.789888 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.812336 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhp8l\" (UniqueName: \"kubernetes.io/projected/a40e52a1-9867-413a-81fb-324789e0a009-kube-api-access-dhp8l\") pod \"mariadb-operator-controller-manager-6994f66f48-vgbmj\" (UID: \"a40e52a1-9867-413a-81fb-324789e0a009\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.812700 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dhp\" (UniqueName: \"kubernetes.io/projected/a6f8ca14-e1db-4dcc-a64d-7bf137105e80-kube-api-access-w8dhp\") pod \"nova-operator-controller-manager-567668f5cf-t9k25\" (UID: \"a6f8ca14-e1db-4dcc-a64d-7bf137105e80\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.813690 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.817800 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-brxds"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.831479 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.832598 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.835908 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-s8wwm"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838424 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xt8q\" (UniqueName: \"kubernetes.io/projected/2ec18a16-766f-4a0c-a393-0ca7a999011e-kube-api-access-4xt8q\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838467 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lvx\" (UniqueName: \"kubernetes.io/projected/8d4c91a6-8441-45a6-bb6a-7655ba464fb9-kube-api-access-s6lvx\") pod \"neutron-operator-controller-manager-64ddbf8bb-kg6xx\" (UID: \"8d4c91a6-8441-45a6-bb6a-7655ba464fb9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.838566 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz9k7\" (UniqueName: \"kubernetes.io/projected/a2547c9d-80d6-491d-8517-26327e35a1f4-kube-api-access-jz9k7\") pod \"octavia-operator-controller-manager-69f8888797-xp9sf\" (UID: \"a2547c9d-80d6-491d-8517-26327e35a1f4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"
Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.839740 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:11:19 crc kubenswrapper[4808]: E0217 16:11:19.839785 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.339772393 +0000 UTC m=+1043.856131466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.874671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.875752 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.884145 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fqrkp"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.884915 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz9k7\" (UniqueName: \"kubernetes.io/projected/a2547c9d-80d6-491d-8517-26327e35a1f4-kube-api-access-jz9k7\") pod \"octavia-operator-controller-manager-69f8888797-xp9sf\" (UID: \"a2547c9d-80d6-491d-8517-26327e35a1f4\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.885562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xt8q\" (UniqueName: \"kubernetes.io/projected/2ec18a16-766f-4a0c-a393-0ca7a999011e-kube-api-access-4xt8q\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.895639 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.906881 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lvx\" (UniqueName: \"kubernetes.io/projected/8d4c91a6-8441-45a6-bb6a-7655ba464fb9-kube-api-access-s6lvx\") pod \"neutron-operator-controller-manager-64ddbf8bb-kg6xx\" (UID: \"8d4c91a6-8441-45a6-bb6a-7655ba464fb9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.929639 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.943163 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468w4\" (UniqueName: \"kubernetes.io/projected/0a170b4f-607d-4c7c-bd0c-ee6c29523b44-kube-api-access-468w4\") pod \"placement-operator-controller-manager-8497b45c89-5mm2j\" (UID: \"0a170b4f-607d-4c7c-bd0c-ee6c29523b44\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.943280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggmbl\" (UniqueName: \"kubernetes.io/projected/74dda28c-8860-440c-b97c-b16bab985ff0-kube-api-access-ggmbl\") pod \"swift-operator-controller-manager-68f46476f-z4vp8\" (UID: \"74dda28c-8860-440c-b97c-b16bab985ff0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.943304 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/6764d3f3-5e9f-4635-973e-81324dbc8e34-kube-api-access-pbdg6\") pod \"ovn-operator-controller-manager-d44cf6b75-slw7s\" (UID: \"6764d3f3-5e9f-4635-973e-81324dbc8e34\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.949221 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.960495 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.963213 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zxqhb"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.964217 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.968675 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tnx2g"
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.996879 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"]
Feb 17 16:11:19 crc kubenswrapper[4808]: I0217 16:11:19.998085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.021813 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-m7nf7"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.040801 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zxqhb"]
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.041127 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hx9l\" (UniqueName: \"kubernetes.io/projected/b42c0b9b-cca5-4ecb-908e-508fbf932dfe-kube-api-access-4hx9l\") pod \"test-operator-controller-manager-7866795846-zxqhb\" (UID: \"b42c0b9b-cca5-4ecb-908e-508fbf932dfe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggmbl\" (UniqueName: \"kubernetes.io/projected/74dda28c-8860-440c-b97c-b16bab985ff0-kube-api-access-ggmbl\") pod \"swift-operator-controller-manager-68f46476f-z4vp8\" (UID: \"74dda28c-8860-440c-b97c-b16bab985ff0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/6764d3f3-5e9f-4635-973e-81324dbc8e34-kube-api-access-pbdg6\") pod \"ovn-operator-controller-manager-d44cf6b75-slw7s\" (UID: \"6764d3f3-5e9f-4635-973e-81324dbc8e34\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468w4\" (UniqueName: \"kubernetes.io/projected/0a170b4f-607d-4c7c-bd0c-ee6c29523b44-kube-api-access-468w4\") pod \"placement-operator-controller-manager-8497b45c89-5mm2j\" (UID: \"0a170b4f-607d-4c7c-bd0c-ee6c29523b44\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.044402 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mh5\" (UniqueName: \"kubernetes.io/projected/bdd19f1d-df45-4dda-a2bd-b14da398e043-kube-api-access-b9mh5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-dnzp5\" (UID: \"bdd19f1d-df45-4dda-a2bd-b14da398e043\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.048212 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"]
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.050605 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.054146 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qpc64"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.066913 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.071042 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"]
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.080300 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468w4\" (UniqueName: \"kubernetes.io/projected/0a170b4f-607d-4c7c-bd0c-ee6c29523b44-kube-api-access-468w4\") pod \"placement-operator-controller-manager-8497b45c89-5mm2j\" (UID: \"0a170b4f-607d-4c7c-bd0c-ee6c29523b44\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.083509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggmbl\" (UniqueName: \"kubernetes.io/projected/74dda28c-8860-440c-b97c-b16bab985ff0-kube-api-access-ggmbl\") pod \"swift-operator-controller-manager-68f46476f-z4vp8\" (UID: \"74dda28c-8860-440c-b97c-b16bab985ff0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.084747 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbdg6\" (UniqueName: \"kubernetes.io/projected/6764d3f3-5e9f-4635-973e-81324dbc8e34-kube-api-access-pbdg6\") pod \"ovn-operator-controller-manager-d44cf6b75-slw7s\" (UID: \"6764d3f3-5e9f-4635-973e-81324dbc8e34\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.089727 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"]
Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.104459 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.121114 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.126948 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.128528 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.135193 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.135324 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hvdj8" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.135520 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.140158 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.150965 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw4t4\" (UniqueName: \"kubernetes.io/projected/cde66c49-b3c4-4f4f-b614-c4343d1c3732-kube-api-access-sw4t4\") pod \"watcher-operator-controller-manager-5db88f68c-5qkk2\" (UID: \"cde66c49-b3c4-4f4f-b614-c4343d1c3732\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.151021 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hx9l\" (UniqueName: \"kubernetes.io/projected/b42c0b9b-cca5-4ecb-908e-508fbf932dfe-kube-api-access-4hx9l\") pod \"test-operator-controller-manager-7866795846-zxqhb\" (UID: \"b42c0b9b-cca5-4ecb-908e-508fbf932dfe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.151098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.151151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9mh5\" (UniqueName: \"kubernetes.io/projected/bdd19f1d-df45-4dda-a2bd-b14da398e043-kube-api-access-b9mh5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-dnzp5\" (UID: \"bdd19f1d-df45-4dda-a2bd-b14da398e043\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.151376 4808 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.151426 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.151411392 +0000 UTC m=+1044.667770465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.161387 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.183389 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hx9l\" (UniqueName: \"kubernetes.io/projected/b42c0b9b-cca5-4ecb-908e-508fbf932dfe-kube-api-access-4hx9l\") pod \"test-operator-controller-manager-7866795846-zxqhb\" (UID: \"b42c0b9b-cca5-4ecb-908e-508fbf932dfe\") " pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.208111 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.208283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9mh5\" (UniqueName: \"kubernetes.io/projected/bdd19f1d-df45-4dda-a2bd-b14da398e043-kube-api-access-b9mh5\") pod \"telemetry-operator-controller-manager-66fcc5ff49-dnzp5\" (UID: \"bdd19f1d-df45-4dda-a2bd-b14da398e043\") " pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.226226 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.227338 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.233083 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-h9q89" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.233558 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.233779 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.240535 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.252989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.253065 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw4t4\" (UniqueName: \"kubernetes.io/projected/cde66c49-b3c4-4f4f-b614-c4343d1c3732-kube-api-access-sw4t4\") pod \"watcher-operator-controller-manager-5db88f68c-5qkk2\" (UID: \"cde66c49-b3c4-4f4f-b614-c4343d1c3732\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.253138 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.253199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qkl\" (UniqueName: \"kubernetes.io/projected/5e47b192-26de-4639-afe8-ec7b5fcc10c8-kube-api-access-s4qkl\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc 
kubenswrapper[4808]: I0217 16:11:20.293245 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw4t4\" (UniqueName: \"kubernetes.io/projected/cde66c49-b3c4-4f4f-b614-c4343d1c3732-kube-api-access-sw4t4\") pod \"watcher-operator-controller-manager-5db88f68c-5qkk2\" (UID: \"cde66c49-b3c4-4f4f-b614-c4343d1c3732\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.336898 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354018 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qkl\" (UniqueName: \"kubernetes.io/projected/5e47b192-26de-4639-afe8-ec7b5fcc10c8-kube-api-access-s4qkl\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354119 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54s4\" (UniqueName: \"kubernetes.io/projected/a83d92da-4f15-4e33-ab57-ae7bc9e0da5e-kube-api-access-v54s4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xcs6n\" (UID: \"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354155 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " 
pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354186 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.354220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354336 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354380 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.854366936 +0000 UTC m=+1044.370726009 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354771 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354803 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.354795498 +0000 UTC m=+1044.871154571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354833 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.354948 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:20.854919421 +0000 UTC m=+1044.371278494 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.375102 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qkl\" (UniqueName: \"kubernetes.io/projected/5e47b192-26de-4639-afe8-ec7b5fcc10c8-kube-api-access-s4qkl\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.376893 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.413810 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.455931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54s4\" (UniqueName: \"kubernetes.io/projected/a83d92da-4f15-4e33-ab57-ae7bc9e0da5e-kube-api-access-v54s4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xcs6n\" (UID: \"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.504304 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54s4\" (UniqueName: \"kubernetes.io/projected/a83d92da-4f15-4e33-ab57-ae7bc9e0da5e-kube-api-access-v54s4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xcs6n\" (UID: \"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.517715 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.690823 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b"] Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.869698 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: I0217 16:11:20.869772 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.869926 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.869984 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.869967056 +0000 UTC m=+1045.386326129 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.870020 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:20 crc kubenswrapper[4808]: E0217 16:11:20.870153 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:21.870124251 +0000 UTC m=+1045.386483324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.138083 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96baec58_63b9_49cd_9cf4_32639e58d4ac.slice/crio-a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac WatchSource:0}: Error finding container a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac: Status 404 returned error can't find the container with id a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.142964 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.161701 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.175230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.175438 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.175509 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.175488328 +0000 UTC m=+1046.691847401 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.191238 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p"] Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.198967 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e657888_7f8f_4d5d_8ef3_7f7472a7e4fb.slice/crio-10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f WatchSource:0}: Error finding container 10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f: Status 404 returned error can't find the container with id 10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.201046 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv924"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.377664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.378081 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.378247 4808 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.378225358 +0000 UTC m=+1046.894584441 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.539042 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.583638 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.589900 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.605379 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.618056 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-zxqhb"] Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.619861 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681f334b_d0ac_43dc_babb_92d9cb7c0440.slice/crio-f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984 WatchSource:0}: Error finding container 
f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984: Status 404 returned error can't find the container with id f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984 Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.653493 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.672876 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.674606 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.681854 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx"] Feb 17 16:11:21 crc kubenswrapper[4808]: W0217 16:11:21.718019 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda83d92da_4f15_4e33_ab57_ae7bc9e0da5e.slice/crio-49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d WatchSource:0}: Error finding container 49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d: Status 404 returned error can't find the container with id 49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.723686 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.730649 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.740651 4808 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.741460 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.760229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" event={"ID":"b622bb16-c5b4-45ea-b493-e681d36d49ac","Type":"ContainerStarted","Data":"a013bdb67caf173970839cbc44f7c0d28e286c6c821cb41a233b48e8ffa75d00"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.760756 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.821812 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5"] Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.836316 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" event={"ID":"96baec58-63b9-49cd-9cf4-32639e58d4ac","Type":"ContainerStarted","Data":"a06b2344c40b3c80d2ac34c2e98b401dd4e26a7125978d1a9e4e62233da528ac"} Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837619 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8dhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-t9k25_openstack-operators(a6f8ca14-e1db-4dcc-a64d-7bf137105e80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837686 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ggmbl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-z4vp8_openstack-operators(74dda28c-8860-440c-b97c-b16bab985ff0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837699 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zfgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-thpj7_openstack-operators(ace1fd54-7ff8-45b9-a77b-c3908044365e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.837719 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sw4t4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-5qkk2_openstack-operators(cde66c49-b3c4-4f4f-b614-c4343d1c3732): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838815 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podUID="74dda28c-8860-440c-b97c-b16bab985ff0" Feb 17 16:11:21 crc 
kubenswrapper[4808]: E0217 16:11:21.838854 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podUID="cde66c49-b3c4-4f4f-b614-c4343d1c3732" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838948 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podUID="a6f8ca14-e1db-4dcc-a64d-7bf137105e80" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.838974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podUID="ace1fd54-7ff8-45b9-a77b-c3908044365e" Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.840155 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" event={"ID":"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb","Type":"ContainerStarted","Data":"10a6a87584a429feaab67a052c3e03a6668a5ea86c1a7e9eccd39b814359a06f"} Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.841079 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9mh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-66fcc5ff49-dnzp5_openstack-operators(bdd19f1d-df45-4dda-a2bd-b14da398e043): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.845143 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podUID="bdd19f1d-df45-4dda-a2bd-b14da398e043" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.846133 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbdg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-slw7s_openstack-operators(6764d3f3-5e9f-4635-973e-81324dbc8e34): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.848389 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podUID="6764d3f3-5e9f-4635-973e-81324dbc8e34" Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.866154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" event={"ID":"0a170b4f-607d-4c7c-bd0c-ee6c29523b44","Type":"ContainerStarted","Data":"a86b74ef2e726a2453a5336f67089f6236bae472a2e5292332bbe88aea3586c9"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.873394 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" 
event={"ID":"93278ccd-52fe-4848-9a46-3f47369d47ab","Type":"ContainerStarted","Data":"6745e62320b4c7aca0252eb3a66554bad32b95d55747ae9157a937d763c44158"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.874976 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" event={"ID":"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8","Type":"ContainerStarted","Data":"c1709e0c16cbcf0c62db44b4f39a7fbb858a964a5421c944bdf39338198333fc"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.876796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" event={"ID":"e2e1b5f4-7ed2-4ab1-871b-1974a7559252","Type":"ContainerStarted","Data":"f01f1a3187f6e8d350cf0043f487d17d6c2abdd4325b2fbbefcb320657dfa386"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.878385 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" event={"ID":"681f334b-d0ac-43dc-babb-92d9cb7c0440","Type":"ContainerStarted","Data":"f5a9beb81f1dbc024a1ada7b5ff85611d38177ccd8ef673390c3ceecf75de984"} Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.901280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:21 crc kubenswrapper[4808]: I0217 16:11:21.901425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: 
\"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901590 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901647 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.901628049 +0000 UTC m=+1047.417987122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901694 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:21 crc kubenswrapper[4808]: E0217 16:11:21.901713 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:23.901707221 +0000 UTC m=+1047.418066294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.899975 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" event={"ID":"6764d3f3-5e9f-4635-973e-81324dbc8e34","Type":"ContainerStarted","Data":"d5c9ba4fcfc85878cebb360bf7a5018e1fa34a5013319692f4f5b1bb9272ca70"} Feb 17 16:11:22 crc kubenswrapper[4808]: E0217 16:11:22.906592 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podUID="6764d3f3-5e9f-4635-973e-81324dbc8e34" Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.914821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" event={"ID":"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e","Type":"ContainerStarted","Data":"49adc6d9733c663faf6a80c40c9cac3b3035952c4d651eeed023ebcf7b3b375d"} Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.916908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" event={"ID":"a40e52a1-9867-413a-81fb-324789e0a009","Type":"ContainerStarted","Data":"e52f1d2721acbcf9be3698fd178070c78dd8d3ce31c40d0b96b77527b5829735"} Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.932346 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" 
event={"ID":"cde66c49-b3c4-4f4f-b614-c4343d1c3732","Type":"ContainerStarted","Data":"c5de695f6323ef5910f014183e4a1a0b742925e652f4caab358ecdbdf09a8535"} Feb 17 16:11:22 crc kubenswrapper[4808]: E0217 16:11:22.939864 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podUID="cde66c49-b3c4-4f4f-b614-c4343d1c3732" Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.949936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" event={"ID":"b42c0b9b-cca5-4ecb-908e-508fbf932dfe","Type":"ContainerStarted","Data":"55d9fddddb15a3287572abfeb236e5e5ca9dd50652f15828c8a5dd795a98661b"} Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.972794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" event={"ID":"ace1fd54-7ff8-45b9-a77b-c3908044365e","Type":"ContainerStarted","Data":"0b30b028fcdfafb2488a0117f705591643a17be31a80764ae26c1e10fc159068"} Feb 17 16:11:22 crc kubenswrapper[4808]: E0217 16:11:22.986212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podUID="ace1fd54-7ff8-45b9-a77b-c3908044365e" Feb 17 16:11:22 crc kubenswrapper[4808]: I0217 16:11:22.991728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" 
event={"ID":"8d4c91a6-8441-45a6-bb6a-7655ba464fb9","Type":"ContainerStarted","Data":"20d7dc1b6b560cc6277ecd84ae978298226c2eb38537305baaac7ead51dadbb4"} Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.004188 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" event={"ID":"a2547c9d-80d6-491d-8517-26327e35a1f4","Type":"ContainerStarted","Data":"729d7f53e063d4eb3ea3fa2107b7194c4029562dc324a5db783dcdf7ed46c68b"} Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.009694 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" event={"ID":"a6f8ca14-e1db-4dcc-a64d-7bf137105e80","Type":"ContainerStarted","Data":"37c6bc603878988cbbea0472e736bda5560b5e05d1eb11742b0a852e536e7944"} Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.015959 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podUID="a6f8ca14-e1db-4dcc-a64d-7bf137105e80" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.023085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" event={"ID":"74dda28c-8860-440c-b97c-b16bab985ff0","Type":"ContainerStarted","Data":"41aa92b167d652175fc28fd82809bf6736d7d5a2812b6fc83b7a9a95cdae24f8"} Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.026395 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podUID="74dda28c-8860-440c-b97c-b16bab985ff0" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.047396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" event={"ID":"bdd19f1d-df45-4dda-a2bd-b14da398e043","Type":"ContainerStarted","Data":"0d7ab3bcc5a037e32d12a0b4f5588885275a790adb5c8bfec1ea47a493fabeb1"} Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.067760 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" event={"ID":"77df5d1f-daff-4508-861a-335ab87f2366","Type":"ContainerStarted","Data":"b0063d6f14f391db832e148bba81c497614912df0fadf51543ec2f3dd9863c9a"} Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.067904 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podUID="bdd19f1d-df45-4dda-a2bd-b14da398e043" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.233364 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.233923 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.233969 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.233956783 +0000 UTC m=+1050.750315856 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.436321 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.436691 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.436773 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.436727482 +0000 UTC m=+1050.953086555 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.943368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:23 crc kubenswrapper[4808]: I0217 16:11:23.943459 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.943642 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.943692 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.943677908 +0000 UTC m=+1051.460036981 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.944222 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:23 crc kubenswrapper[4808]: E0217 16:11:23.944252 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:27.944241794 +0000 UTC m=+1051.460600867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.081756 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podUID="ace1fd54-7ff8-45b9-a77b-c3908044365e" Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082093 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podUID="74dda28c-8860-440c-b97c-b16bab985ff0" Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podUID="a6f8ca14-e1db-4dcc-a64d-7bf137105e80" Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082190 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podUID="bdd19f1d-df45-4dda-a2bd-b14da398e043" Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082263 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podUID="cde66c49-b3c4-4f4f-b614-c4343d1c3732" Feb 17 16:11:24 crc kubenswrapper[4808]: E0217 16:11:24.082296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podUID="6764d3f3-5e9f-4635-973e-81324dbc8e34" Feb 17 
16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.240730 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.240954 4808 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.241401 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert podName:6508a74d-2dba-4d1b-910c-95c9463c15a4 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.241379766 +0000 UTC m=+1058.757738839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert") pod "infra-operator-controller-manager-79d975b745-n6qxn" (UID: "6508a74d-2dba-4d1b-910c-95c9463c15a4") : secret "infra-operator-webhook-server-cert" not found Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.444004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.444185 4808 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:27 crc 
kubenswrapper[4808]: E0217 16:11:27.444525 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert podName:2ec18a16-766f-4a0c-a393-0ca7a999011e nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.444505955 +0000 UTC m=+1058.960865028 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" (UID: "2ec18a16-766f-4a0c-a393-0ca7a999011e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.951443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:27 crc kubenswrapper[4808]: I0217 16:11:27.951524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.952294 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.952344 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. 
No retries permitted until 2026-02-17 16:11:35.952329384 +0000 UTC m=+1059.468688457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.954062 4808 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 17 16:11:27 crc kubenswrapper[4808]: E0217 16:11:27.954109 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:35.954097562 +0000 UTC m=+1059.470456635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "metrics-server-cert" not found Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.269199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: \"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.277630 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6508a74d-2dba-4d1b-910c-95c9463c15a4-cert\") pod \"infra-operator-controller-manager-79d975b745-n6qxn\" (UID: 
\"6508a74d-2dba-4d1b-910c-95c9463c15a4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.385839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.474019 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.477586 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ec18a16-766f-4a0c-a393-0ca7a999011e-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws\" (UID: \"2ec18a16-766f-4a0c-a393-0ca7a999011e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.481273 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.747209 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.747404 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s6lvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-kg6xx_openstack-operators(8d4c91a6-8441-45a6-bb6a-7655ba464fb9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.748738 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" podUID="8d4c91a6-8441-45a6-bb6a-7655ba464fb9" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.980636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod 
\"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.981165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.981338 4808 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 17 16:11:35 crc kubenswrapper[4808]: E0217 16:11:35.981400 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs podName:5e47b192-26de-4639-afe8-ec7b5fcc10c8 nodeName:}" failed. No retries permitted until 2026-02-17 16:11:51.981384324 +0000 UTC m=+1075.497743397 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs") pod "openstack-operator-controller-manager-546d579865-b8s4r" (UID: "5e47b192-26de-4639-afe8-ec7b5fcc10c8") : secret "webhook-server-cert" not found Feb 17 16:11:35 crc kubenswrapper[4808]: I0217 16:11:35.984242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-metrics-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.233027 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" podUID="8d4c91a6-8441-45a6-bb6a-7655ba464fb9" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.314144 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.315291 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v54s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xcs6n_openstack-operators(a83d92da-4f15-4e33-ab57-ae7bc9e0da5e): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.317072 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" podUID="a83d92da-4f15-4e33-ab57-ae7bc9e0da5e" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.975966 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.976188 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4jgrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-8xfc6_openstack-operators(96baec58-63b9-49cd-9cf4-32639e58d4ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:11:36 crc kubenswrapper[4808]: E0217 16:11:36.977407 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" podUID="96baec58-63b9-49cd-9cf4-32639e58d4ac" Feb 17 16:11:37 crc kubenswrapper[4808]: E0217 16:11:37.240746 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" podUID="96baec58-63b9-49cd-9cf4-32639e58d4ac" Feb 17 16:11:37 crc kubenswrapper[4808]: E0217 16:11:37.242516 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" podUID="a83d92da-4f15-4e33-ab57-ae7bc9e0da5e" Feb 17 16:11:37 crc kubenswrapper[4808]: I0217 16:11:37.960348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn"] Feb 17 16:11:37 crc kubenswrapper[4808]: I0217 16:11:37.986809 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws"] Feb 17 16:11:38 crc kubenswrapper[4808]: W0217 16:11:38.319909 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6508a74d_2dba_4d1b_910c_95c9463c15a4.slice/crio-237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55 WatchSource:0}: Error finding container 237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55: Status 404 returned error can't find the container with id 
237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55 Feb 17 16:11:38 crc kubenswrapper[4808]: W0217 16:11:38.321722 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec18a16_766f_4a0c_a393_0ca7a999011e.slice/crio-cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79 WatchSource:0}: Error finding container cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79: Status 404 returned error can't find the container with id cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79 Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.266143 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" event={"ID":"e2e1b5f4-7ed2-4ab1-871b-1974a7559252","Type":"ContainerStarted","Data":"10ba36d4f9cf03b45783fab1951237e478555c6bef77aa74f843c9d4918aa3c5"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.266519 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.270920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" event={"ID":"0a170b4f-607d-4c7c-bd0c-ee6c29523b44","Type":"ContainerStarted","Data":"63e8a86166c0d60b5adc435b1753a337c61a19af3e97088c7b2e8f9cfbb53239"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.271188 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.279802 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" 
event={"ID":"74dda28c-8860-440c-b97c-b16bab985ff0","Type":"ContainerStarted","Data":"6497a201ad8e9130bd3a0568def9743e4a96faf5dbd76f138408a2c0aec4a7e0"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.280433 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.287737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" event={"ID":"93278ccd-52fe-4848-9a46-3f47369d47ab","Type":"ContainerStarted","Data":"d7f91e5327480e544341356ffa79a8ee03c2d2edf8fdaa07bb58a258ca2dcc5c"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.287909 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.299894 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" podStartSLOduration=4.115590268 podStartE2EDuration="20.299875464s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:20.776558888 +0000 UTC m=+1044.292917961" lastFinishedPulling="2026-02-17 16:11:36.960844084 +0000 UTC m=+1060.477203157" observedRunningTime="2026-02-17 16:11:39.299774881 +0000 UTC m=+1062.816133954" watchObservedRunningTime="2026-02-17 16:11:39.299875464 +0000 UTC m=+1062.816234537" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.303919 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" event={"ID":"b42c0b9b-cca5-4ecb-908e-508fbf932dfe","Type":"ContainerStarted","Data":"7bf2580f0e19d355458c489b39109492283ef204a75a011033182321aedaec9b"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.304654 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.345940 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" podStartSLOduration=5.041002382 podStartE2EDuration="20.34592292s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.640431307 +0000 UTC m=+1045.156790380" lastFinishedPulling="2026-02-17 16:11:36.945351845 +0000 UTC m=+1060.461710918" observedRunningTime="2026-02-17 16:11:39.329543388 +0000 UTC m=+1062.845902461" watchObservedRunningTime="2026-02-17 16:11:39.34592292 +0000 UTC m=+1062.862281993" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.368906 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" event={"ID":"77df5d1f-daff-4508-861a-335ab87f2366","Type":"ContainerStarted","Data":"c7c580f02fa62c4b557abb26c0105494d4fd28d5a667d1edb351e3e50d268919"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.369558 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.370301 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" podStartSLOduration=5.117258348 podStartE2EDuration="20.3702877s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.69704266 +0000 UTC m=+1045.213401733" lastFinishedPulling="2026-02-17 16:11:36.950071992 +0000 UTC m=+1060.466431085" observedRunningTime="2026-02-17 16:11:39.366982571 +0000 UTC m=+1062.883341654" watchObservedRunningTime="2026-02-17 16:11:39.3702877 +0000 UTC 
m=+1062.886646773" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.390628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" event={"ID":"a2547c9d-80d6-491d-8517-26327e35a1f4","Type":"ContainerStarted","Data":"59f1bc51c76506d0352a289169377333de9bc21398f8f86076b19fd57d8cf149"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.391223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.401973 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" event={"ID":"b622bb16-c5b4-45ea-b493-e681d36d49ac","Type":"ContainerStarted","Data":"d102a1692b29c78ec5949caf797a6f631fc63ce4f4fdca2a995d1ab4319dce2b"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.402606 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.410629 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" event={"ID":"a40e52a1-9867-413a-81fb-324789e0a009","Type":"ContainerStarted","Data":"b2e8f40bc85c48f93a9ebc1a04f882ac64bc96ec2e858900d68c3eb95e8624f3"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.411547 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.412056 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" podStartSLOduration=3.8760442729999998 podStartE2EDuration="20.412040691s" 
podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837555005 +0000 UTC m=+1045.353914078" lastFinishedPulling="2026-02-17 16:11:38.373551423 +0000 UTC m=+1061.889910496" observedRunningTime="2026-02-17 16:11:39.409134162 +0000 UTC m=+1062.925493235" watchObservedRunningTime="2026-02-17 16:11:39.412040691 +0000 UTC m=+1062.928399764" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.424972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" event={"ID":"d4bd0818-617e-418a-b7c7-f70ba7ebc3d8","Type":"ContainerStarted","Data":"9c3d9151cb320a5badba0841bfb936a18ca767b80699a4b018bac68278862dc8"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.425055 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.447220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" event={"ID":"ace1fd54-7ff8-45b9-a77b-c3908044365e","Type":"ContainerStarted","Data":"f58304db8542cdf4ba5a2ead3868c83fac7d59192ab35082ded69eab18dd4582"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.448065 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.457089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" event={"ID":"681f334b-d0ac-43dc-babb-92d9cb7c0440","Type":"ContainerStarted","Data":"02f2062e15e1d75b80c1caf8051d0d941859d1acb5970a57b47ff4e2471daf18"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.458167 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.459737 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" podStartSLOduration=5.257152746 podStartE2EDuration="20.459714972s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.743234961 +0000 UTC m=+1045.259594034" lastFinishedPulling="2026-02-17 16:11:36.945797187 +0000 UTC m=+1060.462156260" observedRunningTime="2026-02-17 16:11:39.443968635 +0000 UTC m=+1062.960327708" watchObservedRunningTime="2026-02-17 16:11:39.459714972 +0000 UTC m=+1062.976074045" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.464896 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" event={"ID":"3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb","Type":"ContainerStarted","Data":"471671a3ff538430bb2cd71466b023fff9c7f5639bd40d52c4753c7643e06ccc"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.465293 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.466958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" event={"ID":"6508a74d-2dba-4d1b-910c-95c9463c15a4","Type":"ContainerStarted","Data":"237b2f365540b7c24cef63cda10c1e1a62ee840be7609d75aa28a2647feb1d55"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.481839 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" podStartSLOduration=4.702851988 podStartE2EDuration="20.48181618s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" 
firstStartedPulling="2026-02-17 16:11:21.164680176 +0000 UTC m=+1044.681039249" lastFinishedPulling="2026-02-17 16:11:36.943644368 +0000 UTC m=+1060.460003441" observedRunningTime="2026-02-17 16:11:39.480454953 +0000 UTC m=+1062.996814056" watchObservedRunningTime="2026-02-17 16:11:39.48181618 +0000 UTC m=+1062.998175253" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.485842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" event={"ID":"2ec18a16-766f-4a0c-a393-0ca7a999011e","Type":"ContainerStarted","Data":"cad07ffd94f36d188bfc3c799761ba83622b51e50da7face66ceac5e9109af79"} Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.537783 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" podStartSLOduration=5.323530643 podStartE2EDuration="20.537767835s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.739257542 +0000 UTC m=+1045.255616615" lastFinishedPulling="2026-02-17 16:11:36.953494744 +0000 UTC m=+1060.469853807" observedRunningTime="2026-02-17 16:11:39.535028701 +0000 UTC m=+1063.051387784" watchObservedRunningTime="2026-02-17 16:11:39.537767835 +0000 UTC m=+1063.054126908" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.554566 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" podStartSLOduration=5.358007337 podStartE2EDuration="20.55454928s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.755644906 +0000 UTC m=+1045.272003979" lastFinishedPulling="2026-02-17 16:11:36.952186829 +0000 UTC m=+1060.468545922" observedRunningTime="2026-02-17 16:11:39.550666635 +0000 UTC m=+1063.067025708" watchObservedRunningTime="2026-02-17 16:11:39.55454928 +0000 UTC m=+1063.070908343" 
Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.591670 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" podStartSLOduration=5.4840432 podStartE2EDuration="20.591652044s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837207015 +0000 UTC m=+1045.353566088" lastFinishedPulling="2026-02-17 16:11:36.944815859 +0000 UTC m=+1060.461174932" observedRunningTime="2026-02-17 16:11:39.58891817 +0000 UTC m=+1063.105277243" watchObservedRunningTime="2026-02-17 16:11:39.591652044 +0000 UTC m=+1063.108011117" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.613148 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" podStartSLOduration=5.310232913 podStartE2EDuration="20.613132136s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.640708144 +0000 UTC m=+1045.157067217" lastFinishedPulling="2026-02-17 16:11:36.943607357 +0000 UTC m=+1060.459966440" observedRunningTime="2026-02-17 16:11:39.610919916 +0000 UTC m=+1063.127278989" watchObservedRunningTime="2026-02-17 16:11:39.613132136 +0000 UTC m=+1063.129491209" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.647022 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" podStartSLOduration=4.061484973 podStartE2EDuration="20.647004783s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837531374 +0000 UTC m=+1045.353890447" lastFinishedPulling="2026-02-17 16:11:38.423051184 +0000 UTC m=+1061.939410257" observedRunningTime="2026-02-17 16:11:39.638446101 +0000 UTC m=+1063.154805174" watchObservedRunningTime="2026-02-17 16:11:39.647004783 +0000 UTC m=+1063.163363856" Feb 17 16:11:39 crc 
kubenswrapper[4808]: I0217 16:11:39.673477 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" podStartSLOduration=4.934671433 podStartE2EDuration="20.673453028s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.206998481 +0000 UTC m=+1044.723357554" lastFinishedPulling="2026-02-17 16:11:36.945780076 +0000 UTC m=+1060.462139149" observedRunningTime="2026-02-17 16:11:39.66390647 +0000 UTC m=+1063.180265563" watchObservedRunningTime="2026-02-17 16:11:39.673453028 +0000 UTC m=+1063.189812101" Feb 17 16:11:39 crc kubenswrapper[4808]: I0217 16:11:39.701260 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" podStartSLOduration=4.943286648 podStartE2EDuration="20.701239551s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.193541368 +0000 UTC m=+1044.709900441" lastFinishedPulling="2026-02-17 16:11:36.951494251 +0000 UTC m=+1060.467853344" observedRunningTime="2026-02-17 16:11:39.694855468 +0000 UTC m=+1063.211214571" watchObservedRunningTime="2026-02-17 16:11:39.701239551 +0000 UTC m=+1063.217598634" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.544285 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" event={"ID":"a6f8ca14-e1db-4dcc-a64d-7bf137105e80","Type":"ContainerStarted","Data":"88a8fdb8db4991b23917c8312b4175332240d40a9f79fc4130257e29403cf5d7"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.544892 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.545967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" event={"ID":"6508a74d-2dba-4d1b-910c-95c9463c15a4","Type":"ContainerStarted","Data":"558762a59baaab1168f86ef43b4d76016a7250671391a5b40b5d0979d10b358a"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.546083 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.547684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" event={"ID":"2ec18a16-766f-4a0c-a393-0ca7a999011e","Type":"ContainerStarted","Data":"c91652db3c8c897772638b6654280bc4621e873e9701eda8ff3cf54fd4856b76"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.547817 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.550107 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" event={"ID":"cde66c49-b3c4-4f4f-b614-c4343d1c3732","Type":"ContainerStarted","Data":"b4044d49f55d6d44041e442f6cbe164ea3fd523bc3d8574d53f27573385913c7"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.550361 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.552864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" event={"ID":"bdd19f1d-df45-4dda-a2bd-b14da398e043","Type":"ContainerStarted","Data":"33d6a07fb5251112637b4c21e182ca6b6a5429ea65ee868cdcd15af9eebf7d94"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.553065 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.554873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" event={"ID":"6764d3f3-5e9f-4635-973e-81324dbc8e34","Type":"ContainerStarted","Data":"403eef907dbbbbbb81eaacc2ef278118280b09ee4a8dec83f69983fcad525b75"} Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.555025 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.569745 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" podStartSLOduration=3.591555949 podStartE2EDuration="27.569727358s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837485563 +0000 UTC m=+1045.353844636" lastFinishedPulling="2026-02-17 16:11:45.815656962 +0000 UTC m=+1069.332016045" observedRunningTime="2026-02-17 16:11:46.56349604 +0000 UTC m=+1070.079855123" watchObservedRunningTime="2026-02-17 16:11:46.569727358 +0000 UTC m=+1070.086086431" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.581984 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" podStartSLOduration=3.5838316900000002 podStartE2EDuration="27.581966879s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.837661398 +0000 UTC m=+1045.354020471" lastFinishedPulling="2026-02-17 16:11:45.835796577 +0000 UTC m=+1069.352155660" observedRunningTime="2026-02-17 16:11:46.581849096 +0000 UTC m=+1070.098208179" watchObservedRunningTime="2026-02-17 16:11:46.581966879 +0000 UTC 
m=+1070.098325952" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.599754 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" podStartSLOduration=20.152870903 podStartE2EDuration="27.599735141s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:38.367603812 +0000 UTC m=+1061.883962885" lastFinishedPulling="2026-02-17 16:11:45.81446805 +0000 UTC m=+1069.330827123" observedRunningTime="2026-02-17 16:11:46.597735717 +0000 UTC m=+1070.114094790" watchObservedRunningTime="2026-02-17 16:11:46.599735141 +0000 UTC m=+1070.116094214" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.633325 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" podStartSLOduration=3.679400278 podStartE2EDuration="27.63330954s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.846024734 +0000 UTC m=+1045.362383807" lastFinishedPulling="2026-02-17 16:11:45.799933996 +0000 UTC m=+1069.316293069" observedRunningTime="2026-02-17 16:11:46.629536997 +0000 UTC m=+1070.145896070" watchObservedRunningTime="2026-02-17 16:11:46.63330954 +0000 UTC m=+1070.149668613" Feb 17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.643231 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" podStartSLOduration=3.672908142 podStartE2EDuration="27.643213148s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.840909565 +0000 UTC m=+1045.357268628" lastFinishedPulling="2026-02-17 16:11:45.811214561 +0000 UTC m=+1069.327573634" observedRunningTime="2026-02-17 16:11:46.641967674 +0000 UTC m=+1070.158326747" watchObservedRunningTime="2026-02-17 16:11:46.643213148 +0000 UTC m=+1070.159572221" Feb 
17 16:11:46 crc kubenswrapper[4808]: I0217 16:11:46.667691 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" podStartSLOduration=20.201706676 podStartE2EDuration="27.66767547s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:38.344917038 +0000 UTC m=+1061.861276111" lastFinishedPulling="2026-02-17 16:11:45.810885812 +0000 UTC m=+1069.327244905" observedRunningTime="2026-02-17 16:11:46.663497488 +0000 UTC m=+1070.179856581" watchObservedRunningTime="2026-02-17 16:11:46.66767547 +0000 UTC m=+1070.184034543" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.393568 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cjh7p" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.394953 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-4cv77" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.477301 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-gl97b" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.514386 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-b7hkk" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.582383 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" event={"ID":"96baec58-63b9-49cd-9cf4-32639e58d4ac","Type":"ContainerStarted","Data":"9e478c9f9a0d25bbeae5b246ed737a0687f185285699245fcb63975c99556b60"} Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.582991 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.600031 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" podStartSLOduration=3.164275062 podStartE2EDuration="30.600001315s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.14042901 +0000 UTC m=+1044.656788083" lastFinishedPulling="2026-02-17 16:11:48.576155263 +0000 UTC m=+1072.092514336" observedRunningTime="2026-02-17 16:11:49.599249155 +0000 UTC m=+1073.115608238" watchObservedRunningTime="2026-02-17 16:11:49.600001315 +0000 UTC m=+1073.116360398" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.709420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv924" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.735792 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-plpr2" Feb 17 16:11:49 crc kubenswrapper[4808]: I0217 16:11:49.964266 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-thpj7" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.046974 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tkhr5" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.108016 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-vgbmj" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.143884 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xp9sf" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.237698 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-z4vp8" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.243522 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-5mm2j" Feb 17 16:11:50 crc kubenswrapper[4808]: I0217 16:11:50.338718 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-zxqhb" Feb 17 16:11:51 crc kubenswrapper[4808]: I0217 16:11:51.591831 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:11:51 crc kubenswrapper[4808]: I0217 16:11:51.593443 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.047987 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.058906 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5e47b192-26de-4639-afe8-ec7b5fcc10c8-webhook-certs\") pod \"openstack-operator-controller-manager-546d579865-b8s4r\" (UID: \"5e47b192-26de-4639-afe8-ec7b5fcc10c8\") " pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.252105 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.538161 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r"] Feb 17 16:11:52 crc kubenswrapper[4808]: W0217 16:11:52.545838 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e47b192_26de_4639_afe8_ec7b5fcc10c8.slice/crio-880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd WatchSource:0}: Error finding container 880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd: Status 404 returned error can't find the container with id 880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd Feb 17 16:11:52 crc kubenswrapper[4808]: I0217 16:11:52.605248 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" event={"ID":"5e47b192-26de-4639-afe8-ec7b5fcc10c8","Type":"ContainerStarted","Data":"880af836abc3db7eb8cdedcc5e43229be289a5c6d06291b732094b8049fcdadd"} Feb 17 16:11:55 crc kubenswrapper[4808]: I0217 16:11:55.395801 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-n6qxn" Feb 17 16:11:55 crc kubenswrapper[4808]: I0217 16:11:55.490982 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.635117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" event={"ID":"5e47b192-26de-4639-afe8-ec7b5fcc10c8","Type":"ContainerStarted","Data":"6d4d77a435b1716349fcb18d5270ad1cbe553927d1e8453a2abbc8dc3f218c2b"} Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.635533 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.637310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" event={"ID":"8d4c91a6-8441-45a6-bb6a-7655ba464fb9","Type":"ContainerStarted","Data":"cd5188157b24f9c4992d2b83ab17e8dcb213752403da8c5826e5978a986199b5"} Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.637477 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.665506 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" podStartSLOduration=37.665487245 podStartE2EDuration="37.665487245s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:11:56.660542851 +0000 UTC m=+1080.176901924" watchObservedRunningTime="2026-02-17 16:11:56.665487245 +0000 UTC m=+1080.181846318" Feb 17 16:11:56 crc kubenswrapper[4808]: I0217 16:11:56.682401 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" podStartSLOduration=3.061028456 podStartE2EDuration="37.682385903s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.832232181 +0000 UTC m=+1045.348591254" lastFinishedPulling="2026-02-17 16:11:56.453589618 +0000 UTC m=+1079.969948701" observedRunningTime="2026-02-17 16:11:56.678741705 +0000 UTC m=+1080.195100778" watchObservedRunningTime="2026-02-17 16:11:56.682385903 +0000 UTC m=+1080.198744976" Feb 17 16:11:57 crc kubenswrapper[4808]: I0217 16:11:57.649553 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" event={"ID":"a83d92da-4f15-4e33-ab57-ae7bc9e0da5e","Type":"ContainerStarted","Data":"dcc6f8302433f854a84b7e778dd07fface235aeb9f74a175a0c5960110747d44"} Feb 17 16:11:57 crc kubenswrapper[4808]: I0217 16:11:57.673990 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xcs6n" podStartSLOduration=3.958778523 podStartE2EDuration="38.673966991s" podCreationTimestamp="2026-02-17 16:11:19 +0000 UTC" firstStartedPulling="2026-02-17 16:11:21.739660834 +0000 UTC m=+1045.256019907" lastFinishedPulling="2026-02-17 16:11:56.454849302 +0000 UTC m=+1079.971208375" observedRunningTime="2026-02-17 16:11:57.667555877 +0000 UTC m=+1081.183914950" watchObservedRunningTime="2026-02-17 16:11:57.673966991 +0000 UTC m=+1081.190326064" Feb 17 16:11:59 crc kubenswrapper[4808]: I0217 16:11:59.690706 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8xfc6" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.069478 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-t9k25" Feb 17 16:12:00 crc kubenswrapper[4808]: 
I0217 16:12:00.211196 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-slw7s" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.380883 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-66fcc5ff49-dnzp5" Feb 17 16:12:00 crc kubenswrapper[4808]: I0217 16:12:00.416767 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-5qkk2" Feb 17 16:12:02 crc kubenswrapper[4808]: I0217 16:12:02.266174 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-546d579865-b8s4r" Feb 17 16:12:10 crc kubenswrapper[4808]: I0217 16:12:10.124963 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-kg6xx" Feb 17 16:12:21 crc kubenswrapper[4808]: I0217 16:12:21.595631 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:21 crc kubenswrapper[4808]: I0217 16:12:21.596334 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.474970 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 
16:12:27.477931 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485141 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485300 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485616 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.485764 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r4pxs" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.494190 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.532047 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.533235 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.538936 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.562148 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.577860 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.577902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.577935 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.578001 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.578019 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678511 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678570 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 
16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.678605 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.679427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.679439 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.681316 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.696129 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"dnsmasq-dns-78dd6ddcc-g8xlz\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.698171 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"dnsmasq-dns-675f4bcbfc-8jstw\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.804449 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:27 crc kubenswrapper[4808]: I0217 16:12:27.870332 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.152694 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.287591 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.950437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" event={"ID":"973eee94-2439-415c-b9b8-2f6f72738ac9","Type":"ContainerStarted","Data":"8041177f9f605013ae787b3681b3a5558dd54bee858e7ca6318f63453fa6a01c"} Feb 17 16:12:28 crc kubenswrapper[4808]: I0217 16:12:28.952787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" event={"ID":"38d70adc-e16e-4470-9b59-1c728c29318d","Type":"ContainerStarted","Data":"36e351405a8f30735cdfbd65ebbfe018758adcc5855f9db2bc133ed0f4654c84"} Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.262956 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.291097 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.292413 4808 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.304518 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.331898 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.331971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.332082 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.434123 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.434191 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.434237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.435230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.435365 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.458741 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"dnsmasq-dns-666b6646f7-8sg8r\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.557761 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.588789 4808 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.590436 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.603509 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.613186 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.638095 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.638154 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.638215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.739489 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdqm8\" (UniqueName: 
\"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.739545 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.739646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.740511 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.740611 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.758970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"dnsmasq-dns-57d769cc4f-5wrzq\" 
(UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:30 crc kubenswrapper[4808]: I0217 16:12:30.920996 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.433673 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.437619 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.444629 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445203 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445496 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gc9dp" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445721 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445753 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.445855 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.446241 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.449372 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450241 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450281 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450316 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450364 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450389 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450462 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450554 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450610 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.450705 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552122 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552169 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552230 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552251 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552270 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552315 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552486 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " 
pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552509 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.552995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.553342 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.553488 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.554812 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.555000 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.556611 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.557231 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.558617 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.558782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.559348 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.559376 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f412b4a2036f29492410677330a9ca63ffe6d8a8c319c56d242ee67a4a97d25/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.570139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.588831 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.711686 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.713632 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.716842 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.716895 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717155 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gsb4q" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717459 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717668 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717765 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.717839 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.722890 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.779167 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856231 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856268 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856292 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856337 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856539 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856567 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856815 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.856857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958388 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958444 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958490 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958513 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958595 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958622 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958657 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958704 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.958736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.960174 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.961254 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.961360 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.961460 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.962154 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.967352 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.967818 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.973030 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.973066 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be40d6772f21ead376a83ce27352b0ce535ee01ddc50414a5dc6453b6d9bcfec/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.975699 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.980363 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:31 crc kubenswrapper[4808]: I0217 16:12:31.980974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.016730 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.037373 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.913393 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.915369 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.924459 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.924803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-s6nf9"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.924989 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.926417 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.933217 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 17 16:12:32 crc kubenswrapper[4808]: I0217 16:12:32.938657 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075126 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075205 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-kolla-config\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075229 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-default\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxgv\" (UniqueName: \"kubernetes.io/projected/a020d38c-5e24-4266-96dc-9050e4d82f46-kube-api-access-mfxgv\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075443 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075532 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.075605 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177513 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177589 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177615 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-kolla-config\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177759 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-default\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177786 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfxgv\" (UniqueName: \"kubernetes.io/projected/a020d38c-5e24-4266-96dc-9050e4d82f46-kube-api-access-mfxgv\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.177815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.178526 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.179021 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-kolla-config\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.179130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.179896 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a020d38c-5e24-4266-96dc-9050e4d82f46-config-data-default\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.182498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.185169 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.185249 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3e1c302f0afff268df14962949a9d196999f26ff33f0979bc5549004932fa8ad/globalmount\"" pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.186508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a020d38c-5e24-4266-96dc-9050e4d82f46-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.205109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfxgv\" (UniqueName: \"kubernetes.io/projected/a020d38c-5e24-4266-96dc-9050e4d82f46-kube-api-access-mfxgv\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.218145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-30394718-1223-46d7-bfe7-4d6809d236ff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30394718-1223-46d7-bfe7-4d6809d236ff\") pod \"openstack-galera-0\" (UID: \"a020d38c-5e24-4266-96dc-9050e4d82f46\") " pod="openstack/openstack-galera-0"
Feb 17 16:12:33 crc kubenswrapper[4808]: I0217 16:12:33.240674 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.518920 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.520795 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.527754 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.527779 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.529164 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.536317 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.540914 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-n66xj"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709394 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709689 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjb7d\" (UniqueName: \"kubernetes.io/projected/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kube-api-access-pjb7d\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709722 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709738 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709864 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709900 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.709937 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811301 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811357 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811520 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811557 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.811690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjb7d\" (UniqueName: \"kubernetes.io/projected/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kube-api-access-pjb7d\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.813284 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.813752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.814399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.815034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.817144 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.817294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade81c90-5cdf-45d4-ad2f-52a3514e1596-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.830586 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.831772 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.835468 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.835986 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.836164 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-n5t75"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.839745 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.839783 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5270466038ae00cf6871391119df2b111d8d15fa0af733fbdb4f1a590701fc8c/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.840742 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjb7d\" (UniqueName: \"kubernetes.io/projected/ade81c90-5cdf-45d4-ad2f-52a3514e1596-kube-api-access-pjb7d\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.850049 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.879010 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6f58a2ff-3a65-40b3-9aef-dace6fc4982b\") pod \"openstack-cell1-galera-0\" (UID: \"ade81c90-5cdf-45d4-ad2f-52a3514e1596\") " pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:34 crc kubenswrapper[4808]: I0217 16:12:34.896424 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0"
Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017311 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-config-data\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0"
Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017345 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlrz\" (UniqueName: \"kubernetes.io/projected/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kube-api-access-wqlrz\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0"
Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kolla-config\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0"
Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.017405 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0"
Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 
16:12:35.119296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119384 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-config-data\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119418 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqlrz\" (UniqueName: \"kubernetes.io/projected/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kube-api-access-wqlrz\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kolla-config\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.119468 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.120299 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kolla-config\") pod 
\"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.120396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-config-data\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.122795 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.122994 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.137670 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqlrz\" (UniqueName: \"kubernetes.io/projected/2ea38754-3b00-4bcb-93d9-28b60dda0e0a-kube-api-access-wqlrz\") pod \"memcached-0\" (UID: \"2ea38754-3b00-4bcb-93d9-28b60dda0e0a\") " pod="openstack/memcached-0" Feb 17 16:12:35 crc kubenswrapper[4808]: I0217 16:12:35.214812 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.891590 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.892910 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.895229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-f9csg" Feb 17 16:12:36 crc kubenswrapper[4808]: I0217 16:12:36.919251 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.046329 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"kube-state-metrics-0\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.147259 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"kube-state-metrics-0\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.184186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"kube-state-metrics-0\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") " pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.210713 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.641926 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.645365 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659021 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-9fp42" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659671 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659706 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659824 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.659920 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.674598 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.759779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 
16:12:37.760101 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s2hk\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-kube-api-access-6s2hk\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760216 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760400 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.760555 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.761228 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 
16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.761351 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862882 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s2hk\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-kube-api-access-6s2hk\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862970 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.862990 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " 
pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.863007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.863037 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.863058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.864263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.866213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 
16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.866362 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56f9931d-b010-4282-9068-16b2e4e4b247-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.866743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.867496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.872290 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/56f9931d-b010-4282-9068-16b2e4e4b247-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.883676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s2hk\" (UniqueName: \"kubernetes.io/projected/56f9931d-b010-4282-9068-16b2e4e4b247-kube-api-access-6s2hk\") pod \"alertmanager-metric-storage-0\" (UID: \"56f9931d-b010-4282-9068-16b2e4e4b247\") " pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:37 crc kubenswrapper[4808]: I0217 16:12:37.978200 4808 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.228237 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.233016 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.235717 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236432 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236493 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2wbtf" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236711 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236895 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.236900 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.239504 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.247694 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.247834 4808 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.370894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.370954 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371059 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh7d7\" (UniqueName: 
\"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371259 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371305 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371509 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.371543 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472857 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472909 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.472960 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc 
kubenswrapper[4808]: I0217 16:12:38.472981 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473011 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473028 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473083 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 
16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.473125 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.474004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.474153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.474160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.477943 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod 
\"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.478603 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.479108 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.479175 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.479198 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f40780962e64d13d6799d8a1c9a177793dc18d1eb26c87512c3b4aff3215b0d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.480160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.487212 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.500452 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.536851 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:38 crc kubenswrapper[4808]: I0217 16:12:38.560362 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.419701 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pfcvm"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.421047 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.423457 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.423537 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6vzxz" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.423704 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.425606 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-wkzp6"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.427077 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.439091 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.467852 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wkzp6"] Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.503893 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-ovn-controller-tls-certs\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.503967 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5sdf\" (UniqueName: \"kubernetes.io/projected/8a76a2ff-ed1a-4279-898c-54e85973f024-kube-api-access-h5sdf\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.503996 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a76a2ff-ed1a-4279-898c-54e85973f024-scripts\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 
16:12:40.504089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-combined-ca-bundle\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.504158 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-log-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605384 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-ovn-controller-tls-certs\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605451 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5sdf\" (UniqueName: \"kubernetes.io/projected/8a76a2ff-ed1a-4279-898c-54e85973f024-kube-api-access-h5sdf\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605477 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605504 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-log\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-run\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605588 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a76a2ff-ed1a-4279-898c-54e85973f024-scripts\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605605 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-etc-ovs\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605628 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605644 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-scripts\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605682 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdjtn\" (UniqueName: \"kubernetes.io/projected/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-kube-api-access-bdjtn\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-lib\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605725 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-combined-ca-bundle\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.605740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-log-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.606509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.606647 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-run-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.606774 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8a76a2ff-ed1a-4279-898c-54e85973f024-var-log-ovn\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.608287 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8a76a2ff-ed1a-4279-898c-54e85973f024-scripts\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.613132 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-ovn-controller-tls-certs\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 
17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.613855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a76a2ff-ed1a-4279-898c-54e85973f024-combined-ca-bundle\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.638853 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5sdf\" (UniqueName: \"kubernetes.io/projected/8a76a2ff-ed1a-4279-898c-54e85973f024-kube-api-access-h5sdf\") pod \"ovn-controller-pfcvm\" (UID: \"8a76a2ff-ed1a-4279-898c-54e85973f024\") " pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-log\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706607 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-run\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-etc-ovs\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-scripts\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdjtn\" (UniqueName: \"kubernetes.io/projected/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-kube-api-access-bdjtn\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706705 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-lib\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-run\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-log\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706939 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-var-lib\") pod \"ovn-controller-ovs-wkzp6\" (UID: 
\"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.706966 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-etc-ovs\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.708928 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-scripts\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.722535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdjtn\" (UniqueName: \"kubernetes.io/projected/30b7fc5a-690b-4ac6-b37c-9c1ec074f962-kube-api-access-bdjtn\") pod \"ovn-controller-ovs-wkzp6\" (UID: \"30b7fc5a-690b-4ac6-b37c-9c1ec074f962\") " pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.745130 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm" Feb 17 16:12:40 crc kubenswrapper[4808]: I0217 16:12:40.756411 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.306554 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.307856 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.310135 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.310317 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.313285 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.313367 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zvwsl" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.313497 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.339908 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421336 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-config\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421378 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421399 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421446 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421490 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcqk\" (UniqueName: \"kubernetes.io/projected/8c434a76-4dcf-4c69-aefa-5cda8b120a26-kube-api-access-dpcqk\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421519 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.421566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523103 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcqk\" (UniqueName: \"kubernetes.io/projected/8c434a76-4dcf-4c69-aefa-5cda8b120a26-kube-api-access-dpcqk\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523385 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523439 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523485 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-config\") pod 
\"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523501 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.523558 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.524596 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.525205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.526349 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c434a76-4dcf-4c69-aefa-5cda8b120a26-config\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.527055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.528230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.538444 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.538499 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da8c1c1c5898d14f5525cc39e1da9a0aa08af59ceda5dda5b3c382b0baabdf5a/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.540402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c434a76-4dcf-4c69-aefa-5cda8b120a26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.542047 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcqk\" (UniqueName: \"kubernetes.io/projected/8c434a76-4dcf-4c69-aefa-5cda8b120a26-kube-api-access-dpcqk\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.576435 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-242c0ec6-a2ba-44b9-be5e-88a23761bae3\") pod \"ovsdbserver-nb-0\" (UID: \"8c434a76-4dcf-4c69-aefa-5cda8b120a26\") " pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:41 crc kubenswrapper[4808]: I0217 16:12:41.624757 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.375969 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.378179 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.382058 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bsn6p" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.382404 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.382612 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.383959 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.385405 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488000 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488070 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " 
pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488184 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfw9v\" (UniqueName: \"kubernetes.io/projected/220c5de1-b4bf-454c-b013-17d78d86cca3-kube-api-access-dfw9v\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.488222 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.489131 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.489191 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-config\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 
16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.489218 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590812 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590902 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590925 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfw9v\" (UniqueName: \"kubernetes.io/projected/220c5de1-b4bf-454c-b013-17d78d86cca3-kube-api-access-dfw9v\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.590954 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-config\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591054 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.591744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.592363 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.592932 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220c5de1-b4bf-454c-b013-17d78d86cca3-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.597822 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.601572 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.604051 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.604091 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bce1e0661817a74b72ed4a389aa718a5527213e8b53598d1402b5c61339dc163/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.607783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220c5de1-b4bf-454c-b013-17d78d86cca3-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.613645 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfw9v\" (UniqueName: \"kubernetes.io/projected/220c5de1-b4bf-454c-b013-17d78d86cca3-kube-api-access-dfw9v\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.641776 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1712f5df-d8e4-41d4-93e0-280b68db7631\") pod \"ovsdbserver-sb-0\" (UID: \"220c5de1-b4bf-454c-b013-17d78d86cca3\") " pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:45 crc kubenswrapper[4808]: I0217 16:12:45.708183 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.682982 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.684261 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690610 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690734 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690747 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-7v6q4" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690871 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.690900 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.704014 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.706916 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7t4x\" (UniqueName: \"kubernetes.io/projected/4fa85572-1552-4a27-8974-b1e2d376167c-kube-api-access-h7t4x\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 
16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707106 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.707288 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.808978 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809071 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809090 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.809150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7t4x\" (UniqueName: \"kubernetes.io/projected/4fa85572-1552-4a27-8974-b1e2d376167c-kube-api-access-h7t4x\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.810213 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.810277 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa85572-1552-4a27-8974-b1e2d376167c-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.827159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.830188 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4fa85572-1552-4a27-8974-b1e2d376167c-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: 
I0217 16:12:46.856498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7t4x\" (UniqueName: \"kubernetes.io/projected/4fa85572-1552-4a27-8974-b1e2d376167c-kube-api-access-h7t4x\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-zfhfg\" (UID: \"4fa85572-1552-4a27-8974-b1e2d376167c\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.923903 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.925236 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.927753 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"] Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.931816 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.932053 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.932274 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 17 16:12:46 crc kubenswrapper[4808]: I0217 16:12:46.994396 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.034435 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.038803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.039027 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.040922 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.050831 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.050888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051036 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nlg\" (UniqueName: \"kubernetes.io/projected/6df15762-0f06-48ff-89bf-00f5118c6ced-kube-api-access-28nlg\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.051322 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.088413 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.127047 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.128215 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.131561 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.132465 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.132637 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.133075 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.133229 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-gwrp6" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.133323 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.135414 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.136348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.144274 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"] Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.155745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: 
\"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156060 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156193 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156716 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.156876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.157412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.157559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.157720 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158216 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158354 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-28nlg\" (UniqueName: \"kubernetes.io/projected/6df15762-0f06-48ff-89bf-00f5118c6ced-kube-api-access-28nlg\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158744 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.158892 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159492 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159623 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159713 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.159895 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2x5n\" (UniqueName: \"kubernetes.io/projected/be29c259-d619-4326-b866-2a8560d9b818-kube-api-access-c2x5n\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.160476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.160601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrw8\" (UniqueName: \"kubernetes.io/projected/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-kube-api-access-gkrw8\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.160723 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.163481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.165573 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.165708 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.166385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6df15762-0f06-48ff-89bf-00f5118c6ced-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.175156 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.179195 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6df15762-0f06-48ff-89bf-00f5118c6ced-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.199441 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28nlg\" (UniqueName: \"kubernetes.io/projected/6df15762-0f06-48ff-89bf-00f5118c6ced-kube-api-access-28nlg\") pod \"cloudkitty-lokistack-querier-58c84b5844-pkj8k\" (UID: \"6df15762-0f06-48ff-89bf-00f5118c6ced\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.214835 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"]
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.262754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263409 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263435 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263546 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263567 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263599 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.263663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.264043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.264898 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266325 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266607 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266807 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266887 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297vp\" (UniqueName: \"kubernetes.io/projected/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-kube-api-access-297vp\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266952 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.266984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267140 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267214 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267257 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.267277 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2x5n\" (UniqueName: \"kubernetes.io/projected/be29c259-d619-4326-b866-2a8560d9b818-kube-api-access-c2x5n\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.268495 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkrw8\" (UniqueName: \"kubernetes.io/projected/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-kube-api-access-gkrw8\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.268921 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.269409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.270392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.270729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.271302 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.271735 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.273986 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.276005 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.276808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/be29c259-d619-4326-b866-2a8560d9b818-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.277127 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.281310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.291839 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2x5n\" (UniqueName: \"kubernetes.io/projected/be29c259-d619-4326-b866-2a8560d9b818-kube-api-access-c2x5n\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4\" (UID: \"be29c259-d619-4326-b866-2a8560d9b818\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.295555 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkrw8\" (UniqueName: \"kubernetes.io/projected/c4fa7a6a-b7fc-464c-b529-dcf8d20de97e-kube-api-access-gkrw8\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-77rbq\" (UID: \"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.357370 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370190 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297vp\" (UniqueName: \"kubernetes.io/projected/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-kube-api-access-297vp\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370256 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370289 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370329 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370385 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370401 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.370427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.371636 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.372671 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.373230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.373744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.374357 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.374690 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.378096 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.385033 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.387214 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297vp\" (UniqueName: \"kubernetes.io/projected/dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0-kube-api-access-297vp\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-mdlhq\" (UID: \"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.468151 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.501391 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.831796 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.832950 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.835020 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.836867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.846859 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.946903 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.948402 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.952462 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.952651 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.958341 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984260 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984308 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984347 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984368 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984449 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjkw\" (UniqueName: \"kubernetes.io/projected/c7929d5b-e791-419e-8039-50cc9f8202f2-kube-api-access-nhjkw\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984517 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:47 crc kubenswrapper[4808]: I0217 16:12:47.984551 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.023284 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.024457 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.029407 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc"
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.030321 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http"
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.051106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086497 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod
\"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086577 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086612 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086673 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086697 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-config\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086717 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086746 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5x5h\" (UniqueName: \"kubernetes.io/projected/c850b5fe-4c28-4136-8136-fae52e38371b-kube-api-access-g5x5h\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086772 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086832 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjkw\" (UniqueName: \"kubernetes.io/projected/c7929d5b-e791-419e-8039-50cc9f8202f2-kube-api-access-nhjkw\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.086906 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.087764 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.087804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc 
kubenswrapper[4808]: I0217 16:12:48.092278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.092317 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.092428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.094386 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.104248 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7929d5b-e791-419e-8039-50cc9f8202f2-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.112340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjkw\" (UniqueName: \"kubernetes.io/projected/c7929d5b-e791-419e-8039-50cc9f8202f2-kube-api-access-nhjkw\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.113382 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.114586 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"c7929d5b-e791-419e-8039-50cc9f8202f2\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190097 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpxpb\" (UniqueName: \"kubernetes.io/projected/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-kube-api-access-tpxpb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190381 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5x5h\" (UniqueName: 
\"kubernetes.io/projected/c850b5fe-4c28-4136-8136-fae52e38371b-kube-api-access-g5x5h\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190480 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190544 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190694 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.190385 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.191500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.191683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c850b5fe-4c28-4136-8136-fae52e38371b-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.194777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.194925 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.195540 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c850b5fe-4c28-4136-8136-fae52e38371b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.207945 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5x5h\" (UniqueName: \"kubernetes.io/projected/c850b5fe-4c28-4136-8136-fae52e38371b-kube-api-access-g5x5h\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.209142 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.211152 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"c850b5fe-4c28-4136-8136-fae52e38371b\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.270059 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293737 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpxpb\" (UniqueName: \"kubernetes.io/projected/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-kube-api-access-tpxpb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-ca-bundle\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293841 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293900 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.293979 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.296036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.300669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.302744 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.302854 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.320494 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.320985 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.322539 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpxpb\" (UniqueName: \"kubernetes.io/projected/d6dbebd3-2b7c-4afa-8937-5c47b749e8b0-kube-api-access-tpxpb\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.323221 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.323381 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d88gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8jstw_openstack(973eee94-2439-415c-b9b8-2f6f72738ac9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.324889 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" podUID="973eee94-2439-415c-b9b8-2f6f72738ac9" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.345950 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.352641 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.359224 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.359348 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2kwnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-g8xlz_openstack(38d70adc-e16e-4470-9b59-1c728c29318d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:12:48 crc kubenswrapper[4808]: E0217 16:12:48.361765 4808 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" podUID="38d70adc-e16e-4470-9b59-1c728c29318d" Feb 17 16:12:48 crc kubenswrapper[4808]: I0217 16:12:48.751907 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.189982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerStarted","Data":"f86bb416640f1c93ce31ac0513d794573c83b4fcf30431f9c4619fd3c48ca73d"} Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.408123 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 17 16:12:49 crc kubenswrapper[4808]: W0217 16:12:49.410286 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda020d38c_5e24_4266_96dc_9050e4d82f46.slice/crio-b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1 WatchSource:0}: Error finding container b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1: Status 404 returned error can't find the container with id b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1 Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.421130 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:12:49 crc kubenswrapper[4808]: I0217 16:12:49.888486 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.005942 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.028634 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.037622 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") pod \"38d70adc-e16e-4470-9b59-1c728c29318d\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.037698 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") pod \"38d70adc-e16e-4470-9b59-1c728c29318d\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.037762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") pod \"38d70adc-e16e-4470-9b59-1c728c29318d\" (UID: \"38d70adc-e16e-4470-9b59-1c728c29318d\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.038354 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config" (OuterVolumeSpecName: "config") pod "38d70adc-e16e-4470-9b59-1c728c29318d" (UID: "38d70adc-e16e-4470-9b59-1c728c29318d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039026 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38d70adc-e16e-4470-9b59-1c728c29318d" (UID: "38d70adc-e16e-4470-9b59-1c728c29318d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039062 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039728 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.039746 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38d70adc-e16e-4470-9b59-1c728c29318d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.045056 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6df15762_0f06_48ff_89bf_00f5118c6ced.slice/crio-1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1 WatchSource:0}: Error finding container 1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1: Status 404 returned error can't find the container with id 1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.045215 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk" (OuterVolumeSpecName: "kube-api-access-2kwnk") pod 
"38d70adc-e16e-4470-9b59-1c728c29318d" (UID: "38d70adc-e16e-4470-9b59-1c728c29318d"). InnerVolumeSpecName "kube-api-access-2kwnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.085985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.135622 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ea38754_3b00_4bcb_93d9_28b60dda0e0a.slice/crio-8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a WatchSource:0}: Error finding container 8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a: Status 404 returned error can't find the container with id 8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.139822 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac5f26b_ff81_49e2_854f_9cad23a57593.slice/crio-83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3 WatchSource:0}: Error finding container 83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3: Status 404 returned error can't find the container with id 83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.140314 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") pod \"973eee94-2439-415c-b9b8-2f6f72738ac9\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.140346 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") pod \"973eee94-2439-415c-b9b8-2f6f72738ac9\" (UID: \"973eee94-2439-415c-b9b8-2f6f72738ac9\") " Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.140712 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kwnk\" (UniqueName: \"kubernetes.io/projected/38d70adc-e16e-4470-9b59-1c728c29318d-kube-api-access-2kwnk\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.141091 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config" (OuterVolumeSpecName: "config") pod "973eee94-2439-415c-b9b8-2f6f72738ac9" (UID: "973eee94-2439-415c-b9b8-2f6f72738ac9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.143706 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade81c90_5cdf_45d4_ad2f_52a3514e1596.slice/crio-38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d WatchSource:0}: Error finding container 38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d: Status 404 returned error can't find the container with id 38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.144467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz" (OuterVolumeSpecName: "kube-api-access-d88gz") pod "973eee94-2439-415c-b9b8-2f6f72738ac9" (UID: "973eee94-2439-415c-b9b8-2f6f72738ac9"). InnerVolumeSpecName "kube-api-access-d88gz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.146373 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56f9931d_b010_4282_9068_16b2e4e4b247.slice/crio-25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94 WatchSource:0}: Error finding container 25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94: Status 404 returned error can't find the container with id 25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.156452 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.164182 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.165643 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.165672 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-g8xlz" event={"ID":"38d70adc-e16e-4470-9b59-1c728c29318d","Type":"ContainerDied","Data":"36e351405a8f30735cdfbd65ebbfe018758adcc5855f9db2bc133ed0f4654c84"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.168674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerStarted","Data":"b661f963ccd127b4dcaef38f6d6413ba4a49bc3411581e5053b0b86666c263d1"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.169144 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.175911 4808 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.175950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" event={"ID":"be29c259-d619-4326-b866-2a8560d9b818","Type":"ContainerStarted","Data":"082ca6b4e12db56a0a0d12947f1627dbd9e1570aebf8a6e79f97728342a05ecc"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.176889 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerStarted","Data":"57ad7e9e95603b9e00dced5aff567d0fff1bbfb9d96b8bfdb7074f711d80c274"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.178295 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2ea38754-3b00-4bcb-93d9-28b60dda0e0a","Type":"ContainerStarted","Data":"8e584d33e0716dd03a9a8239a014677a0b4e6765f9efdd4b2ed136a42830d11a"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.179794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"25179e355abb25d773555e2205dd9a0a8245b979b1d8cf45a66e547633879c94"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.180654 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.181242 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerStarted","Data":"4a7ab805f716d84e3d73f9394b1b45757927f27450dd37708e63205a258bb4f5"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.182460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" 
event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerStarted","Data":"83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.183816 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm" event={"ID":"8a76a2ff-ed1a-4279-898c-54e85973f024","Type":"ContainerStarted","Data":"48f92b9e6e4aae0fd714e91be23901f5268bea1eaceba93c5365d9d0bcb08fa6"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.186122 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.187971 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"c5db49362fb8e196d602a48475009fd093a64b0b760100ed93c1a54dba3d1832"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.191401 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerStarted","Data":"38d81e1f90b082445ee66ef12a169b7e78ae9af1be78970dc6491d62d66db85d"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.193369 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" event={"ID":"973eee94-2439-415c-b9b8-2f6f72738ac9","Type":"ContainerDied","Data":"8041177f9f605013ae787b3681b3a5558dd54bee858e7ca6318f63453fa6a01c"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.193401 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8jstw" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.195794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" event={"ID":"6df15762-0f06-48ff-89bf-00f5118c6ced","Type":"ContainerStarted","Data":"1d159a168bbd1922669ef46ab9dfc149a4e68d656a62cbcfc3691d5c0d8648f1"} Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.244874 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d88gz\" (UniqueName: \"kubernetes.io/projected/973eee94-2439-415c-b9b8-2f6f72738ac9-kube-api-access-d88gz\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.244904 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973eee94-2439-415c-b9b8-2f6f72738ac9-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.256386 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.271723 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-g8xlz"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.314041 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.324688 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8jstw"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.336803 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.356194 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.373773 4808 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.385795 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-index-gateway,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=index-gateway -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:
/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-index-gateway-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpxpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-index-gateway-0_openstack(d6dbebd3-2b7c-4afa-8937-5c47b749e8b0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 
16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.386994 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="d6dbebd3-2b7c-4afa-8937-5c47b749e8b0" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.388526 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.396458 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:
/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7t4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-zfhfg_openstack(4fa85572-1552-4a27-8974-b1e2d376167c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.397871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podUID="4fa85572-1552-4a27-8974-b1e2d376167c" Feb 
17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.401882 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:gateway,Image:registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934,Command:[],Args:[--debug.name=lokistack-gateway --web.listen=0.0.0.0:8080 --web.internal.listen=0.0.0.0:8081 --web.healthchecks.url=https://localhost:8080 --log.level=warn --logs.read.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.tail.endpoint=https://cloudkitty-lokistack-query-frontend-http.openstack.svc.cluster.local:3100 --logs.write.endpoint=https://cloudkitty-lokistack-distributor-http.openstack.svc.cluster.local:3100 --logs.write-timeout=4m0s --rbac.config=/etc/lokistack-gateway/rbac.yaml --tenants.config=/etc/lokistack-gateway/tenants.yaml --server.read-timeout=48s --server.write-timeout=6m0s --tls.min-version=VersionTLS12 --tls.server.cert-file=/var/run/tls/http/server/tls.crt --tls.server.key-file=/var/run/tls/http/server/tls.key --tls.healthchecks.server-ca-file=/var/run/ca/server/service-ca.crt --tls.healthchecks.server-name=cloudkitty-lokistack-gateway-http.openstack.svc.cluster.local --tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt --tls.internal.server.key-file=/var/run/tls/http/server/tls.key --tls.min-version=VersionTLS12 --tls.cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logs.tls.ca-file=/var/run/ca/upstream/service-ca.crt --logs.tls.cert-file=/var/run/tls/http/upstream/tls.crt --logs.tls.key-file=/var/run/tls/http/upstream/tls.key 
--tls.client-auth-type=RequestClientCert],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},ContainerPort{Name:public,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rbac,ReadOnly:true,MountPath:/etc/lokistack-gateway/rbac.yaml,SubPath:rbac.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tenants,ReadOnly:true,MountPath:/etc/lokistack-gateway/tenants.yaml,SubPath:tenants.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lokistack-gateway,ReadOnly:true,MountPath:/etc/lokistack-gateway/lokistack-gateway.rego,SubPath:lokistack-gateway.rego,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-secret,ReadOnly:true,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-client-http,ReadOnly:true,MountPath:/var/run/tls/http/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/upstream,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-gateway-ca-bundle,ReadOnly:true,MountPath:/var/run/ca/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-ca-bundle,ReadOnly:false,MountPath:/var/run/tenants-ca/cloudkitty,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gkrw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/live,Port:{0 8081 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8081 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-gateway-7f8685b49f-77rbq_openstack(c4fa7a6a-b7fc-464c-b529-dcf8d20de97e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.404338 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" podUID="c4fa7a6a-b7fc-464c-b529-dcf8d20de97e" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.404528 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml 
-runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5x5h,ReadOnly:true,MountPath:/var/run/secret
s/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-compactor-0_openstack(c850b5fe-4c28-4136-8136-fae52e38371b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 17 16:12:50 crc kubenswrapper[4808]: E0217 16:12:50.405783 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="c850b5fe-4c28-4136-8136-fae52e38371b" Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.424389 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 17 16:12:50 crc 
kubenswrapper[4808]: I0217 16:12:50.437685 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"] Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.447848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"] Feb 17 16:12:50 crc kubenswrapper[4808]: W0217 16:12:50.513650 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c434a76_4dcf_4c69_aefa_5cda8b120a26.slice/crio-98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4 WatchSource:0}: Error finding container 98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4: Status 404 returned error can't find the container with id 98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4 Feb 17 16:12:50 crc kubenswrapper[4808]: I0217 16:12:50.520440 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.161864 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38d70adc-e16e-4470-9b59-1c728c29318d" path="/var/lib/kubelet/pods/38d70adc-e16e-4470-9b59-1c728c29318d/volumes" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.162457 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973eee94-2439-415c-b9b8-2f6f72738ac9" path="/var/lib/kubelet/pods/973eee94-2439-415c-b9b8-2f6f72738ac9/volumes" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.204039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"c7929d5b-e791-419e-8039-50cc9f8202f2","Type":"ContainerStarted","Data":"ac175af8c51c60196e3db1cdaa1115158cb3fe980bc2271fba02c2b52c653e09"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.204995 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"8c434a76-4dcf-4c69-aefa-5cda8b120a26","Type":"ContainerStarted","Data":"98ee19382e2fd4eea1cfca969f2386b40dbc276d79b826c8e0a4477fb46127a4"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.206315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerStarted","Data":"fe6c047a841d65d85a9f0e609ea1b96b4c6bc76859984c45d4fc65974fb15811"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.207327 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0","Type":"ContainerStarted","Data":"63329b52a0c8247b74093b8acc28b39c130f2ee05c18ab46ad443269a2d5312e"} Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.208781 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="d6dbebd3-2b7c-4afa-8937-5c47b749e8b0" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.211448 4808 generic.go:334] "Generic (PLEG): container finished" podID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" exitCode=0 Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.211494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerDied","Data":"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.213486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" event={"ID":"4fa85572-1552-4a27-8974-b1e2d376167c","Type":"ContainerStarted","Data":"087e41c46374c7d3fbc02456f1d41ea551c9e915163061c15c14bdcab6cad92e"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.214460 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" event={"ID":"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e","Type":"ContainerStarted","Data":"feadba16ace8e9ce88dd690f086be86ebf2a225876af032846cd52e794d3b6a1"} Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.215495 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podUID="4fa85572-1552-4a27-8974-b1e2d376167c" Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.216072 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" podUID="c4fa7a6a-b7fc-464c-b529-dcf8d20de97e" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.223477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" event={"ID":"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0","Type":"ContainerStarted","Data":"c7fb597c1c2f36ad981298a1d507b4e4aae1c17ec9b1318e1b62e7efe004abd2"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.225950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" 
event={"ID":"c850b5fe-4c28-4136-8136-fae52e38371b","Type":"ContainerStarted","Data":"92ba2bbb03d437b99f78a1aae60b10118b23cff12e044974d037b8b0e94570f5"} Feb 17 16:12:51 crc kubenswrapper[4808]: E0217 16:12:51.227386 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="c850b5fe-4c28-4136-8136-fae52e38371b" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.241891 4808 generic.go:334] "Generic (PLEG): container finished" podID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" exitCode=0 Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.241968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerDied","Data":"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce"} Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.468804 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.572154 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-wkzp6"] Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.592415 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.592464 4808 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.592502 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.593134 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:12:51 crc kubenswrapper[4808]: I0217 16:12:51.593180 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba" gracePeriod=600 Feb 17 16:12:51 crc kubenswrapper[4808]: W0217 16:12:51.995192 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220c5de1_b4bf_454c_b013_17d78d86cca3.slice/crio-e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1 WatchSource:0}: Error finding container e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1: Status 404 returned error can't find the container with id e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1 Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.291110 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba" exitCode=0 Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.291209 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"} Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.291255 4808 scope.go:117] "RemoveContainer" containerID="284430f1fb330ef6ae53b6d6dd49c2af767ae61ae02d682d5cba6dbd7c4ce02d" Feb 17 16:12:52 crc kubenswrapper[4808]: I0217 16:12:52.299467 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220c5de1-b4bf-454c-b013-17d78d86cca3","Type":"ContainerStarted","Data":"e5865d5bc9f70b4f0846b6ae06a0bf8e8a806db07740cf0303d524d08a4ecea1"} Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podUID="4fa85572-1552-4a27-8974-b1e2d376167c" Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307665 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="c850b5fe-4c28-4136-8136-fae52e38371b" Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307743 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"loki-index-gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" podUID="d6dbebd3-2b7c-4afa-8937-5c47b749e8b0" Feb 17 16:12:52 crc kubenswrapper[4808]: E0217 16:12:52.307781 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gateway\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/lokistack-gateway-rhel9@sha256:41eda20b890c200ee7fce0b56b5d168445cd9a6486d560f39ce73d0704e03934\\\"\"" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" podUID="c4fa7a6a-b7fc-464c-b529-dcf8d20de97e" Feb 17 16:12:54 crc kubenswrapper[4808]: I0217 16:12:54.314381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"b7e5aef974fc8a45b3d23dcb43254aa563342f33d66ab4d6df979b8972ab7483"} Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.300270 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.301236 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 
--listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh7d7,ReadOnly:true,MountPath:/var/run/secrets/kub
ernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(2917eca2-0431-4bd6-ad96-ab8464cc4fd7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.302195 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.302488 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.302807 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:loki-querier,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=querier -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-querier-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-c
a-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28nlg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-querier-58c84b5844-pkj8k_openstack(6df15762-0f06-48ff-89bf-00f5118c6ced): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.304391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" podUID="6df15762-0f06-48ff-89bf-00f5118c6ced" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.387121 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-querier\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" podUID="6df15762-0f06-48ff-89bf-00f5118c6ced" Feb 17 16:13:02 crc kubenswrapper[4808]: E0217 16:13:02.387212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.076279 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.076483 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n88h9dh57bh676h554h58fhdch656h597h556hd9h666h5bchddh56ch57fhf4h659h54bh558h665h5bbh575h8bh685h5ffhc4h5ch5d6hddh646h545q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqlrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(2ea38754-3b00-4bcb-93d9-28b60dda0e0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.077646 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="2ea38754-3b00-4bcb-93d9-28b60dda0e0a" Feb 17 16:13:03 crc kubenswrapper[4808]: E0217 16:13:03.392009 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="2ea38754-3b00-4bcb-93d9-28b60dda0e0a" Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.130534 4808 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.131013 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pjb7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(ade81c90-5cdf-45d4-ad2f-52a3514e1596): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.132734 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="ade81c90-5cdf-45d4-ad2f-52a3514e1596"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.397477 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.397664 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfxgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(a020d38c-5e24-4266-96dc-9050e4d82f46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.398932 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="a020d38c-5e24-4266-96dc-9050e4d82f46"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.419702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="ade81c90-5cdf-45d4-ad2f-52a3514e1596"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.419728 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="a020d38c-5e24-4266-96dc-9050e4d82f46"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.700652 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified"
Feb 17 16:13:06 crc kubenswrapper[4808]: E0217 16:13:06.700839 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56ch549h694h697h8h589h5b4h578h5cbhbdh684hc8h57bh575h4h7ch576h5f7h88h68ch699h88h5ddh697h94h5f4h58h55dh5dh57bh6fh65cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(8c434a76-4dcf-4c69-aefa-5cda8b120a26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 17 16:13:08 crc kubenswrapper[4808]: I0217 16:13:08.444035 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49"}
Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.657803 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.657870 4808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0"
Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.658033 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrnn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(0a2bf674-1881-41e9-9c0f-93e8f14ac222): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 17 16:13:08 crc kubenswrapper[4808]: E0217 16:13:08.659880 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222"
Feb 17 16:13:09 crc kubenswrapper[4808]: I0217 16:13:09.460486 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerStarted","Data":"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce"}
Feb 17 16:13:09 crc kubenswrapper[4808]: I0217 16:13:09.461605 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq"
Feb 17 16:13:09 crc kubenswrapper[4808]: E0217 16:13:09.477058 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222"
Feb 17 16:13:09 crc kubenswrapper[4808]: I0217 16:13:09.501442 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" podStartSLOduration=38.903571664 podStartE2EDuration="39.501421161s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.06274317 +0000 UTC m=+1133.579102243" lastFinishedPulling="2026-02-17 16:12:50.660592667 +0000 UTC m=+1134.176951740" observedRunningTime="2026-02-17 16:13:09.494471472 +0000 UTC m=+1153.010830545" watchObservedRunningTime="2026-02-17 16:13:09.501421161 +0000 UTC m=+1153.017780234"
Feb 17 16:13:10 crc kubenswrapper[4808]: I0217 16:13:10.472266 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" event={"ID":"be29c259-d619-4326-b866-2a8560d9b818","Type":"ContainerStarted","Data":"ad1db0549960832f0c52d19a16630dbc313a477607dbb1efac4387c34900ecb9"}
Feb 17 16:13:10 crc kubenswrapper[4808]: I0217 16:13:10.473768 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:13:10 crc kubenswrapper[4808]: I0217 16:13:10.513319 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4" podStartSLOduration=7.104670292 podStartE2EDuration="24.513295388s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.104000007 +0000 UTC m=+1133.620359080" lastFinishedPulling="2026-02-17 16:13:07.512625093 +0000 UTC m=+1151.028984176" observedRunningTime="2026-02-17 16:13:10.511822128 +0000 UTC m=+1154.028181211" watchObservedRunningTime="2026-02-17 16:13:10.513295388 +0000 UTC m=+1154.029654471"
Feb 17 16:13:11 crc kubenswrapper[4808]: E0217 16:13:11.238696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="8c434a76-4dcf-4c69-aefa-5cda8b120a26"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.485363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"c850b5fe-4c28-4136-8136-fae52e38371b","Type":"ContainerStarted","Data":"365d1fda7dc08a45bbf79c14ba335b4273126085b4fea9654c779f8c356a92d4"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.486143 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.488480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8c434a76-4dcf-4c69-aefa-5cda8b120a26","Type":"ContainerStarted","Data":"d056ba09093e2b7fcfc74f1bbf4fae4b8d0c36df395ee8b95e6ebeaf91c294e9"}
Feb 17 16:13:11 crc kubenswrapper[4808]: E0217 16:13:11.492135 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="8c434a76-4dcf-4c69-aefa-5cda8b120a26"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.493416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"d6dbebd3-2b7c-4afa-8937-5c47b749e8b0","Type":"ContainerStarted","Data":"fbda8631bae74da6b76563d2704fb46101b4e20134f4b7d112690b3486ec41cf"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.493778 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.496414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"9668c0913113779d6a3c7f672c39d2f4905fbbea560063417a4444ac286de562"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.501358 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerStarted","Data":"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.501498 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.505866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"c7929d5b-e791-419e-8039-50cc9f8202f2","Type":"ContainerStarted","Data":"1e0a3f64a1d9304e54c45d6a329fe87b933bf3d74ea52279becd1608617a25aa"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.506097 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.508283 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" event={"ID":"dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0","Type":"ContainerStarted","Data":"0024fe61e7e5edce8a413484d1e11d9c581c5cc963e9ea54babc75e64715cd46"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.509069 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.514801 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=-9223372011.339989 podStartE2EDuration="25.514786634s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.404425401 +0000 UTC m=+1133.920784474" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:11.510861497 +0000 UTC m=+1155.027220600" watchObservedRunningTime="2026-02-17 16:13:11.514786634 +0000 UTC m=+1155.031145707"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.518130 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" event={"ID":"4fa85572-1552-4a27-8974-b1e2d376167c","Type":"ContainerStarted","Data":"f98d913f2e956d9c296144d39839f95499e60c922196f5702dc321f27dfa499c"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.518502 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.522023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" event={"ID":"c4fa7a6a-b7fc-464c-b529-dcf8d20de97e","Type":"ContainerStarted","Data":"aa839321232d9ef7ebe06b138c51f6a574df0569526c3cedb08419ce7f22a465"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.524422 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.527872 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm" event={"ID":"8a76a2ff-ed1a-4279-898c-54e85973f024","Type":"ContainerStarted","Data":"62fcd90b140ef708febe681e2940a5eb938b5105c6ca9115b5284e8bef67d870"}
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.527920 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-pfcvm"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.531320 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.537018 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.584405 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" podStartSLOduration=41.069339022 podStartE2EDuration="41.584381158s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.146333683 +0000 UTC m=+1133.662692756" lastFinishedPulling="2026-02-17 16:12:50.661375819 +0000 UTC m=+1134.177734892" observedRunningTime="2026-02-17 16:13:11.570524793 +0000 UTC m=+1155.086883916" watchObservedRunningTime="2026-02-17 16:13:11.584381158 +0000 UTC m=+1155.100740231"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.614944 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-mdlhq" podStartSLOduration=7.170532481 podStartE2EDuration="24.614922735s" podCreationTimestamp="2026-02-17 16:12:47 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.378583852 +0000 UTC m=+1133.894942925" lastFinishedPulling="2026-02-17 16:13:07.822974106 +0000 UTC m=+1151.339333179" observedRunningTime="2026-02-17 16:13:11.611221875 +0000 UTC m=+1155.127580968" watchObservedRunningTime="2026-02-17 16:13:11.614922735 +0000 UTC m=+1155.131281808"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.617664 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=6.938370954 podStartE2EDuration="24.617642228s" podCreationTimestamp="2026-02-17 16:12:47 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.38556406 +0000 UTC m=+1133.901923133" lastFinishedPulling="2026-02-17 16:13:08.064835304 +0000 UTC m=+1151.581194407" observedRunningTime="2026-02-17 16:13:11.594984475 +0000 UTC m=+1155.111343568" watchObservedRunningTime="2026-02-17 16:13:11.617642228 +0000 UTC m=+1155.134001311"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.652184 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=8.303942073 podStartE2EDuration="25.652162614s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.385503269 +0000 UTC m=+1133.901862342" lastFinishedPulling="2026-02-17 16:13:07.73372381 +0000 UTC m=+1151.250082883" observedRunningTime="2026-02-17 16:13:11.642991755 +0000 UTC m=+1155.159350828" watchObservedRunningTime="2026-02-17 16:13:11.652162614 +0000 UTC m=+1155.168521677"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.670742 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-pfcvm" podStartSLOduration=13.965100699 podStartE2EDuration="31.670726006s" podCreationTimestamp="2026-02-17 16:12:40 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.897720012 +0000 UTC m=+1133.414079085" lastFinishedPulling="2026-02-17 16:13:07.603345319 +0000 UTC m=+1151.119704392" observedRunningTime="2026-02-17 16:13:11.665956127 +0000 UTC m=+1155.182315200" watchObservedRunningTime="2026-02-17 16:13:11.670726006 +0000 UTC m=+1155.187085079"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.692171 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg" podStartSLOduration=8.372070016 podStartE2EDuration="25.692155236s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.396235409 +0000 UTC m=+1133.912594482" lastFinishedPulling="2026-02-17 16:13:07.716320629 +0000 UTC m=+1151.232679702" observedRunningTime="2026-02-17 16:13:11.684882949 +0000 UTC m=+1155.201242032" watchObservedRunningTime="2026-02-17 16:13:11.692155236 +0000 UTC m=+1155.208514299"
Feb 17 16:13:11 crc kubenswrapper[4808]: I0217 16:13:11.716447 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-77rbq" podStartSLOduration=7.067012269 podStartE2EDuration="24.716427504s" podCreationTimestamp="2026-02-17 16:12:47 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.401747919 +0000 UTC m=+1133.918106992" lastFinishedPulling="2026-02-17 16:13:08.051163154 +0000 UTC m=+1151.567522227" observedRunningTime="2026-02-17 16:13:11.704375567 +0000 UTC m=+1155.220734640" watchObservedRunningTime="2026-02-17 16:13:11.716427504 +0000 UTC m=+1155.232786577"
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.541379 4808 generic.go:334] "Generic (PLEG): container finished" podID="30b7fc5a-690b-4ac6-b37c-9c1ec074f962" containerID="9668c0913113779d6a3c7f672c39d2f4905fbbea560063417a4444ac286de562" exitCode=0
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.541452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerDied","Data":"9668c0913113779d6a3c7f672c39d2f4905fbbea560063417a4444ac286de562"}
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.551792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerStarted","Data":"19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430"}
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.554626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerStarted","Data":"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9"}
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.559670 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"eaab0a6bfd8b2f49bb5b0419ebf83f83f3a7d7db298ba6d150f0ad5ee4951a2a"}
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.565871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220c5de1-b4bf-454c-b013-17d78d86cca3","Type":"ContainerStarted","Data":"af3e2a009a7197d0992be49640be58e7c23e3d5086195401a2da944ebba0e803"}
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.567099 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220c5de1-b4bf-454c-b013-17d78d86cca3","Type":"ContainerStarted","Data":"4e7c685fa6fff63dbe53be62bc471d8379634655c88d7bbf8d325e45d53ca65c"}
Feb 17 16:13:12 crc kubenswrapper[4808]: E0217 16:13:12.571120 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="8c434a76-4dcf-4c69-aefa-5cda8b120a26"
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.694370 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.622768516 podStartE2EDuration="28.694350311s" podCreationTimestamp="2026-02-17 16:12:44 +0000 UTC" firstStartedPulling="2026-02-17 16:12:51.99769176 +0000 UTC m=+1135.514050833" lastFinishedPulling="2026-02-17 16:13:08.069273525 +0000 UTC m=+1151.585632628" observedRunningTime="2026-02-17 16:13:12.692485651 +0000 UTC m=+1156.208844724" watchObservedRunningTime="2026-02-17 16:13:12.694350311 +0000 UTC m=+1156.210709394"
Feb 17 16:13:12 crc kubenswrapper[4808]: I0217 16:13:12.708895 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:13:13 crc kubenswrapper[4808]: I0217 16:13:13.580554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"6a1c89af93d94efd5543256071b315797cc20e0d74a7e5c42b8ddd0d1c80278d"}
Feb 17 16:13:13 crc kubenswrapper[4808]: I0217 16:13:13.580636 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-wkzp6" event={"ID":"30b7fc5a-690b-4ac6-b37c-9c1ec074f962","Type":"ContainerStarted","Data":"17b1effd602c5d79c34fa01cdf78b27d41c205829b975aff02552a21c69842e5"}
Feb 17 16:13:13 crc kubenswrapper[4808]: I0217 16:13:13.620160 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-wkzp6" podStartSLOduration=19.65116461 podStartE2EDuration="33.620129547s" podCreationTimestamp="2026-02-17 16:12:40 +0000 UTC" firstStartedPulling="2026-02-17 16:12:53.635301958 +0000 UTC m=+1137.151661031" lastFinishedPulling="2026-02-17 16:13:07.604266885 +0000 UTC m=+1151.120625968" observedRunningTime="2026-02-17 16:13:13.602035577 +0000 UTC m=+1157.118394690" watchObservedRunningTime="2026-02-17 16:13:13.620129547 +0000 UTC m=+1157.136488660"
Feb 17 16:13:14 crc kubenswrapper[4808]: I0217 16:13:14.586765 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wkzp6"
Feb 17 16:13:14 crc kubenswrapper[4808]: I0217 16:13:14.587236 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-wkzp6"
Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.617793 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r"
Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.708673 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.753668 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.923841 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq"
Feb 17 16:13:15 crc kubenswrapper[4808]: I0217 16:13:15.978750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"]
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.602750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" event={"ID":"6df15762-0f06-48ff-89bf-00f5118c6ced","Type":"ContainerStarted","Data":"b375cfb7110702e40d0ee78d64b6a20b4645c6a0ae1c5f875a9acfef15ecbf18"}
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.603380 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" containerID="cri-o://84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" gracePeriod=10
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.629221 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" podStartSLOduration=-9223372006.225574 podStartE2EDuration="30.629201659s" podCreationTimestamp="2026-02-17 16:12:46 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.062847413 +0000 UTC m=+1133.579206476" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:16.623551316 +0000 UTC m=+1160.139910419" watchObservedRunningTime="2026-02-17 16:13:16.629201659 +0000 UTC m=+1160.145560732"
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.655851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.960463 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"]
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.971185 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd"
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.976480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 17 16:13:16 crc kubenswrapper[4808]: I0217 16:13:16.987324 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"]
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.000967 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qh29t"]
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.002389 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qh29t"
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.006226 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.058106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qh29t"]
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077785 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovn-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t"
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd"
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-combined-ca-bundle\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t"
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077927 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") "
pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.077966 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d5a09f-33dd-49cf-9a31-a21d73a43b86-config\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078001 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhss\" (UniqueName: \"kubernetes.io/projected/52d5a09f-33dd-49cf-9a31-a21d73a43b86-kube-api-access-tmhss\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078165 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovs-rundir\") pod \"ovn-controller-metrics-qh29t\" 
(UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.078212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.128155 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovs-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187137 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187182 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovn-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187202 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187234 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-combined-ca-bundle\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187271 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/52d5a09f-33dd-49cf-9a31-a21d73a43b86-config\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.187315 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmhss\" (UniqueName: \"kubernetes.io/projected/52d5a09f-33dd-49cf-9a31-a21d73a43b86-kube-api-access-tmhss\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.188274 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovn-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.190608 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.190629 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52d5a09f-33dd-49cf-9a31-a21d73a43b86-config\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.190684 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: 
\"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.191126 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/52d5a09f-33dd-49cf-9a31-a21d73a43b86-ovs-rundir\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.191292 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.197971 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-combined-ca-bundle\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.210248 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmhss\" (UniqueName: \"kubernetes.io/projected/52d5a09f-33dd-49cf-9a31-a21d73a43b86-kube-api-access-tmhss\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.217335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/52d5a09f-33dd-49cf-9a31-a21d73a43b86-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qh29t\" (UID: \"52d5a09f-33dd-49cf-9a31-a21d73a43b86\") " 
pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.220491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"dnsmasq-dns-7f896c8c65-7j4gd\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") " pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.278002 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.290285 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") pod \"bac5f26b-ff81-49e2-854f-9cad23a57593\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.290387 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") pod \"bac5f26b-ff81-49e2-854f-9cad23a57593\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.290431 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") pod \"bac5f26b-ff81-49e2-854f-9cad23a57593\" (UID: \"bac5f26b-ff81-49e2-854f-9cad23a57593\") " Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.299976 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp" (OuterVolumeSpecName: "kube-api-access-tdvxp") pod "bac5f26b-ff81-49e2-854f-9cad23a57593" 
(UID: "bac5f26b-ff81-49e2-854f-9cad23a57593"). InnerVolumeSpecName "kube-api-access-tdvxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.324891 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.325955 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.334831 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qh29t" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.346643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bac5f26b-ff81-49e2-854f-9cad23a57593" (UID: "bac5f26b-ff81-49e2-854f-9cad23a57593"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.359301 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config" (OuterVolumeSpecName: "config") pod "bac5f26b-ff81-49e2-854f-9cad23a57593" (UID: "bac5f26b-ff81-49e2-854f-9cad23a57593"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.372696 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.373484 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.373496 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.373518 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="init" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.373524 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="init" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.373712 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerName="dnsmasq-dns" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.374672 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.384332 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.405926 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdvxp\" (UniqueName: \"kubernetes.io/projected/bac5f26b-ff81-49e2-854f-9cad23a57593-kube-api-access-tdvxp\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.405985 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.405995 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac5f26b-ff81-49e2-854f-9cad23a57593-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.412401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507754 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" 
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507851 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.507896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.613777 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 
16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614219 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614260 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.614318 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.615921 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.616234 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.616830 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.619198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.626615 4808 generic.go:334] "Generic (PLEG): container finished" podID="bac5f26b-ff81-49e2-854f-9cad23a57593" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" exitCode=0 Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.627795 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.630840 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerDied","Data":"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a"} Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.630910 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-8sg8r" event={"ID":"bac5f26b-ff81-49e2-854f-9cad23a57593","Type":"ContainerDied","Data":"83aebd7060ebf58080acd8dda61d0160f4457ae1b4e3e4db27d61232cdd028e3"} Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.630932 4808 scope.go:117] "RemoveContainer" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.640349 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"dnsmasq-dns-86db49b7ff-v8hvr\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") " pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.670784 4808 scope.go:117] "RemoveContainer" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.680691 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.696176 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-8sg8r"] Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.708714 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.712324 4808 scope.go:117] "RemoveContainer" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.714076 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a\": container with ID starting with 84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a not found: ID does not exist" containerID="84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.714109 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a"} err="failed to get container status \"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a\": rpc error: code = NotFound desc = could not find container \"84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a\": container with ID starting with 84853abf40c69f53c1f33037c497f55962bc9212b54400898031ca8bed97c77a not found: ID does not exist" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.714137 4808 scope.go:117] "RemoveContainer" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" Feb 17 16:13:17 crc kubenswrapper[4808]: E0217 16:13:17.714438 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce\": container with ID starting with 33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce not found: ID does not exist" containerID="33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce" Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 
16:13:17.714466 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce"} err="failed to get container status \"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce\": rpc error: code = NotFound desc = could not find container \"33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce\": container with ID starting with 33437dcb06d23989d40121f3a469434526c25c910f4a2965d927d0bdfc5b08ce not found: ID does not exist"
Feb 17 16:13:17 crc kubenswrapper[4808]: I0217 16:13:17.909091 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qh29t"]
Feb 17 16:13:17 crc kubenswrapper[4808]: W0217 16:13:17.941799 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52d5a09f_33dd_49cf_9a31_a21d73a43b86.slice/crio-b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6 WatchSource:0}: Error finding container b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6: Status 404 returned error can't find the container with id b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.040364 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"]
Feb 17 16:13:18 crc kubenswrapper[4808]: W0217 16:13:18.051900 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1602c17_564b_482f_b5cc_cadd68ec07da.slice/crio-f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e WatchSource:0}: Error finding container f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e: Status 404 returned error can't find the container with id f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.349899 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"]
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.650142 4808 generic.go:334] "Generic (PLEG): container finished" podID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerID="02fe4733904170d6ff8ca546ae278d5400ac1f6b5e0058e060083b8b17f2a502" exitCode=0
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.650241 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" event={"ID":"b1602c17-564b-482f-b5cc-cadd68ec07da","Type":"ContainerDied","Data":"02fe4733904170d6ff8ca546ae278d5400ac1f6b5e0058e060083b8b17f2a502"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.650274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" event={"ID":"b1602c17-564b-482f-b5cc-cadd68ec07da","Type":"ContainerStarted","Data":"f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.660670 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qh29t" event={"ID":"52d5a09f-33dd-49cf-9a31-a21d73a43b86","Type":"ContainerStarted","Data":"c00ff8d4a75ccaaaad28d0a38b92e55dce1ebb4576e8e7aef8057a40df458b3b"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.660725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qh29t" event={"ID":"52d5a09f-33dd-49cf-9a31-a21d73a43b86","Type":"ContainerStarted","Data":"b61328a374915c00a61741b36d0de944f0cbd3fb4e900ff90643dd9e298dedf6"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.666562 4808 generic.go:334] "Generic (PLEG): container finished" podID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerID="b60fbde46c6075a50ace4cd1663669a692d98861f29087030c80fceb181a0f6f" exitCode=0
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.666635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerDied","Data":"b60fbde46c6075a50ace4cd1663669a692d98861f29087030c80fceb181a0f6f"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.666663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerStarted","Data":"d63637f01ebacc82cd0cd4fa9f1b31ac08b1e5040c4e16549d0faa344661b80a"}
Feb 17 16:13:18 crc kubenswrapper[4808]: I0217 16:13:18.707712 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qh29t" podStartSLOduration=2.707693255 podStartE2EDuration="2.707693255s" podCreationTimestamp="2026-02-17 16:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:18.701458176 +0000 UTC m=+1162.217817249" watchObservedRunningTime="2026-02-17 16:13:18.707693255 +0000 UTC m=+1162.224052328"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.102039 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") "
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") "
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157417 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") "
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.157459 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") pod \"b1602c17-564b-482f-b5cc-cadd68ec07da\" (UID: \"b1602c17-564b-482f-b5cc-cadd68ec07da\") "
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.164252 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bac5f26b-ff81-49e2-854f-9cad23a57593" path="/var/lib/kubelet/pods/bac5f26b-ff81-49e2-854f-9cad23a57593/volumes"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.275929 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw" (OuterVolumeSpecName: "kube-api-access-jllqw") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "kube-api-access-jllqw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.364000 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jllqw\" (UniqueName: \"kubernetes.io/projected/b1602c17-564b-482f-b5cc-cadd68ec07da-kube-api-access-jllqw\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.474241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.492412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config" (OuterVolumeSpecName: "config") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.495933 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1602c17-564b-482f-b5cc-cadd68ec07da" (UID: "b1602c17-564b-482f-b5cc-cadd68ec07da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.566674 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.566724 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.566733 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1602c17-564b-482f-b5cc-cadd68ec07da-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.694875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerStarted","Data":"8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18"}
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.695562 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.696985 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.697039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-7j4gd" event={"ID":"b1602c17-564b-482f-b5cc-cadd68ec07da","Type":"ContainerDied","Data":"f063161929d5787f6080c4e7e94d4ec9783c2f61bba4d4ed1ee08f2ebd980f2e"}
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.697085 4808 scope.go:117] "RemoveContainer" containerID="02fe4733904170d6ff8ca546ae278d5400ac1f6b5e0058e060083b8b17f2a502"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.698791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerStarted","Data":"63d14012fa7e0d1db45466cd7673614d41b6384d4b8d5ab46a11ce8b71cfbb93"}
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.708053 4808 generic.go:334] "Generic (PLEG): container finished" podID="56f9931d-b010-4282-9068-16b2e4e4b247" containerID="eaab0a6bfd8b2f49bb5b0419ebf83f83f3a7d7db298ba6d150f0ad5ee4951a2a" exitCode=0
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.708121 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerDied","Data":"eaab0a6bfd8b2f49bb5b0419ebf83f83f3a7d7db298ba6d150f0ad5ee4951a2a"}
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.719034 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerStarted","Data":"0ea8527a371975975278f77fbada0061706f8832d74429f7bac385a21fce660f"}
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.723393 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" podStartSLOduration=2.723371525 podStartE2EDuration="2.723371525s" podCreationTimestamp="2026-02-17 16:13:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:19.713720193 +0000 UTC m=+1163.230079266" watchObservedRunningTime="2026-02-17 16:13:19.723371525 +0000 UTC m=+1163.239730608"
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.833260 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"]
Feb 17 16:13:19 crc kubenswrapper[4808]: I0217 16:13:19.841934 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-7j4gd"]
Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.728654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"2ea38754-3b00-4bcb-93d9-28b60dda0e0a","Type":"ContainerStarted","Data":"4394d899179994d78a4e42db6f34ea90e3d9c5f609acb5be4ecfd05118f69bbf"}
Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.728944 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.731269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0"}
Feb 17 16:13:20 crc kubenswrapper[4808]: I0217 16:13:20.747603 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.315505518 podStartE2EDuration="46.747585626s" podCreationTimestamp="2026-02-17 16:12:34 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.139427626 +0000 UTC m=+1133.655786699" lastFinishedPulling="2026-02-17 16:13:19.571507734 +0000 UTC m=+1163.087866807" observedRunningTime="2026-02-17 16:13:20.743867335 +0000 UTC m=+1164.260226408" watchObservedRunningTime="2026-02-17 16:13:20.747585626 +0000 UTC m=+1164.263944699"
Feb 17 16:13:21 crc kubenswrapper[4808]: I0217 16:13:21.167628 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" path="/var/lib/kubelet/pods/b1602c17-564b-482f-b5cc-cadd68ec07da/volumes"
Feb 17 16:13:22 crc kubenswrapper[4808]: I0217 16:13:22.753219 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"98654911aaf83ce6cc519f041d3a0e10f34536f058c65db77bda34adf754d38f"}
Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.769421 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade81c90-5cdf-45d4-ad2f-52a3514e1596" containerID="0ea8527a371975975278f77fbada0061706f8832d74429f7bac385a21fce660f" exitCode=0
Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.769654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerDied","Data":"0ea8527a371975975278f77fbada0061706f8832d74429f7bac385a21fce660f"}
Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.774269 4808 generic.go:334] "Generic (PLEG): container finished" podID="a020d38c-5e24-4266-96dc-9050e4d82f46" containerID="63d14012fa7e0d1db45466cd7673614d41b6384d4b8d5ab46a11ce8b71cfbb93" exitCode=0
Feb 17 16:13:23 crc kubenswrapper[4808]: I0217 16:13:23.774310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerDied","Data":"63d14012fa7e0d1db45466cd7673614d41b6384d4b8d5ab46a11ce8b71cfbb93"}
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.783520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a020d38c-5e24-4266-96dc-9050e4d82f46","Type":"ContainerStarted","Data":"a394837335a0eb508b22e180b1e69e1e33f3eda577ba4224fd4c1b14c7ac5119"}
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.785538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ade81c90-5cdf-45d4-ad2f-52a3514e1596","Type":"ContainerStarted","Data":"0588f71e9a5f5fc1b883f656058d8cc65fead8be7fab00b0f6048fb1284601c0"}
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.787191 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8c434a76-4dcf-4c69-aefa-5cda8b120a26","Type":"ContainerStarted","Data":"bb093a9a448d6b27086896cfe5e9ec8580c0bc815915eb5536a1f7c2a75e71df"}
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.806420 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.475376049 podStartE2EDuration="53.80640645s" podCreationTimestamp="2026-02-17 16:12:31 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.412659639 +0000 UTC m=+1132.929018722" lastFinishedPulling="2026-02-17 16:13:18.74369005 +0000 UTC m=+1162.260049123" observedRunningTime="2026-02-17 16:13:24.805626989 +0000 UTC m=+1168.321986062" watchObservedRunningTime="2026-02-17 16:13:24.80640645 +0000 UTC m=+1168.322765523"
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.829474 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.667675566 podStartE2EDuration="44.829454255s" podCreationTimestamp="2026-02-17 16:12:40 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.515682414 +0000 UTC m=+1134.032041487" lastFinishedPulling="2026-02-17 16:13:23.677461103 +0000 UTC m=+1167.193820176" observedRunningTime="2026-02-17 16:13:24.825859317 +0000 UTC m=+1168.342218390" watchObservedRunningTime="2026-02-17 16:13:24.829454255 +0000 UTC m=+1168.345813328"
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.854209 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.094490593 podStartE2EDuration="51.854183684s" podCreationTimestamp="2026-02-17 16:12:33 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.148629176 +0000 UTC m=+1133.664988249" lastFinishedPulling="2026-02-17 16:13:18.908322277 +0000 UTC m=+1162.424681340" observedRunningTime="2026-02-17 16:13:24.847098232 +0000 UTC m=+1168.363457345" watchObservedRunningTime="2026-02-17 16:13:24.854183684 +0000 UTC m=+1168.370542777"
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.894168 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:13:24 crc kubenswrapper[4808]: I0217 16:13:24.896691 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.216449 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.797530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerStarted","Data":"b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3"}
Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.798264 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.800753 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"56f9931d-b010-4282-9068-16b2e4e4b247","Type":"ContainerStarted","Data":"03937f48577de4a835f7ff8c33ce25fd6b70916328f18305898cd5ad82b45276"}
Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.818130 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.631862538 podStartE2EDuration="49.818108212s" podCreationTimestamp="2026-02-17 16:12:36 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.368416416 +0000 UTC m=+1133.884775489" lastFinishedPulling="2026-02-17 16:13:25.55466205 +0000 UTC m=+1169.071021163" observedRunningTime="2026-02-17 16:13:25.811838593 +0000 UTC m=+1169.328197686" watchObservedRunningTime="2026-02-17 16:13:25.818108212 +0000 UTC m=+1169.334467285"
Feb 17 16:13:25 crc kubenswrapper[4808]: I0217 16:13:25.839135 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=16.912500915 podStartE2EDuration="48.839110982s" podCreationTimestamp="2026-02-17 16:12:37 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.15433783 +0000 UTC m=+1133.670696903" lastFinishedPulling="2026-02-17 16:13:22.080947887 +0000 UTC m=+1165.597306970" observedRunningTime="2026-02-17 16:13:25.835034841 +0000 UTC m=+1169.351393944" watchObservedRunningTime="2026-02-17 16:13:25.839110982 +0000 UTC m=+1169.355470065"
Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.624912 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.625124 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.817392 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0" exitCode=0
Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.818262 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0"}
Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.819097 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Feb 17 16:13:26 crc kubenswrapper[4808]: I0217 16:13:26.822608 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.048672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-zfhfg"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.311750 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"]
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.311984 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" containerID="cri-o://8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18" gracePeriod=10
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.314865 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.366892 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.374656 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"]
Feb 17 16:13:27 crc kubenswrapper[4808]: E0217 16:13:27.375008 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerName="init"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.375024 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerName="init"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.375185 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1602c17-564b-482f-b5cc-cadd68ec07da" containerName="init"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.376044 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.407888 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"]
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522719 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522922 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.522966 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.523013 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.633165 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.634363 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.643520 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.643652 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.643769 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.647801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.651872 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.656454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.658878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.697641 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"dnsmasq-dns-698758b865-pq8qq\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834732 4808 generic.go:334] "Generic (PLEG): container finished" podID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerID="8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18" exitCode=0
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834793 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerDied","Data":"8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18"}
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" event={"ID":"27d2df02-b7e7-4fe9-a125-5a6acf093c85","Type":"ContainerDied","Data":"d63637f01ebacc82cd0cd4fa9f1b31ac08b1e5040c4e16549d0faa344661b80a"}
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.834855 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d63637f01ebacc82cd0cd4fa9f1b31ac08b1e5040c4e16549d0faa344661b80a"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.843954 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr"
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964293 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") "
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964401 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") "
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964458 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") "
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") "
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.964694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") pod \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\" (UID: \"27d2df02-b7e7-4fe9-a125-5a6acf093c85\") "
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.970480 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g" (OuterVolumeSpecName: "kube-api-access-6rj9g") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "kube-api-access-6rj9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:13:27 crc kubenswrapper[4808]: I0217 16:13:27.995797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.012652 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.023699 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.027193 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.040462 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config" (OuterVolumeSpecName: "config") pod "27d2df02-b7e7-4fe9-a125-5a6acf093c85" (UID: "27d2df02-b7e7-4fe9-a125-5a6acf093c85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066429 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066456 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066467 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rj9g\" (UniqueName: \"kubernetes.io/projected/27d2df02-b7e7-4fe9-a125-5a6acf093c85-kube-api-access-6rj9g\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066476 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.066486 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d2df02-b7e7-4fe9-a125-5a6acf093c85-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.288854 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.304752 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.390379 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.406562 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.407248 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="init"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.407365 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="init"
Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.407498 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.407883 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.408196 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.415981 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.446486 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.447476 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.448882 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.452963 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dqpkp"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.517457 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.540878 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"]
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599176 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-cache\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599435 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599460 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName:
\"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-lock\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599554 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f52ebe4-f003-4d0b-8539-1d406db95b2f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.599641 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hl7b\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-kube-api-access-6hl7b\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.700991 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-lock\") pod \"swift-storage-0\" 
(UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701098 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f52ebe4-f003-4d0b-8539-1d406db95b2f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hl7b\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-kube-api-access-6hl7b\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-cache\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.701255 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:28 crc kubenswrapper[4808]: E0217 16:13:28.701294 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:28 crc 
kubenswrapper[4808]: E0217 16:13:28.701361 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:29.201337968 +0000 UTC m=+1172.717697121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701522 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-cache\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.701604 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8f52ebe4-f003-4d0b-8539-1d406db95b2f-lock\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.707643 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f52ebe4-f003-4d0b-8539-1d406db95b2f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.714804 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.714867 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44b3cc468d8ea04f83345148f53d61bae2d04f9b0032f327344dd9c4f5b28475/globalmount\"" pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.722298 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hl7b\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-kube-api-access-6hl7b\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.758197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7f7e85ae-97b7-4933-b91f-f2522cd6cf2e\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.843217 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerStarted","Data":"ddfff32a5e606c9bd26b149ee55b24df69316a56d9a9ba2c7680c271a80e072c"} Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.843266 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.890895 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.899565 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-v8hvr"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.979797 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-qg65w"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.980952 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.983144 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.983400 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.990323 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qg65w"] Feb 17 16:13:28 crc kubenswrapper[4808]: I0217 16:13:28.991076 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110638 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110744 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110771 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.110879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.159398 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" path="/var/lib/kubelet/pods/27d2df02-b7e7-4fe9-a125-5a6acf093c85/volumes" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.212850 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.212951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213515 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 
16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213607 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213714 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213825 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213853 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.213935 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.214315 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.214409 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: E0217 16:13:29.214433 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:29 crc kubenswrapper[4808]: E0217 16:13:29.214447 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:29 crc kubenswrapper[4808]: E0217 16:13:29.214487 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:30.214472791 +0000 UTC m=+1173.730831864 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.218185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.219308 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.222719 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.239808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"swift-ring-rebalance-qg65w\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.299634 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.538667 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.627485 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.678315 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.732061 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.808842 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-qg65w"] Feb 17 16:13:29 crc kubenswrapper[4808]: W0217 16:13:29.815960 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb2856a7_c37a_4ecc_a4a2_c49864240315.slice/crio-c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208 WatchSource:0}: Error finding container c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208: Status 404 returned error can't find the container with id c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208 Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.852068 4808 generic.go:334] "Generic (PLEG): container finished" podID="317e56c8-5f01-4313-a632-12ccaccf9442" containerID="05efd9fb2a30652e1a674ecb739d46dca429eecdc2a90da4de03961953c36078" exitCode=0 Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.852129 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" 
event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerDied","Data":"05efd9fb2a30652e1a674ecb739d46dca429eecdc2a90da4de03961953c36078"} Feb 17 16:13:29 crc kubenswrapper[4808]: I0217 16:13:29.854357 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerStarted","Data":"c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208"} Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.025594 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.027414 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037194 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-677jx" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037440 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037672 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.037906 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.051105 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.131899 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-scripts\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc 
kubenswrapper[4808]: I0217 16:13:30.132109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncdh\" (UniqueName: \"kubernetes.io/projected/79b7a04d-f324-40d0-ad2b-370cfef43858-kube-api-access-zncdh\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132464 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.132816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-config\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.133524 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0" Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.133690 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-config\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238584 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238619 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-scripts\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238756 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zncdh\" (UniqueName: \"kubernetes.io/projected/79b7a04d-f324-40d0-ad2b-370cfef43858-kube-api-access-zncdh\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.238821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: E0217 16:13:30.238995 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 16:13:30 crc kubenswrapper[4808]: E0217 16:13:30.239009 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 16:13:30 crc kubenswrapper[4808]: E0217 16:13:30.239057 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:32.239042582 +0000 UTC m=+1175.755401655 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.239862 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.243372 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.245036 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-config\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.250659 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.252979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79b7a04d-f324-40d0-ad2b-370cfef43858-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.260294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/79b7a04d-f324-40d0-ad2b-370cfef43858-scripts\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.263145 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zncdh\" (UniqueName: \"kubernetes.io/projected/79b7a04d-f324-40d0-ad2b-370cfef43858-kube-api-access-zncdh\") pod \"ovn-northd-0\" (UID: \"79b7a04d-f324-40d0-ad2b-370cfef43858\") " pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.373925 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.865269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerStarted","Data":"5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a"}
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.867394 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-pq8qq"
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.900985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 17 16:13:30 crc kubenswrapper[4808]: I0217 16:13:30.909176 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podStartSLOduration=3.909157695 podStartE2EDuration="3.909157695s" podCreationTimestamp="2026-02-17 16:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:30.888008552 +0000 UTC m=+1174.404367625" watchObservedRunningTime="2026-02-17 16:13:30.909157695 +0000 UTC m=+1174.425516768"
Feb 17 16:13:31 crc kubenswrapper[4808]: I0217 16:13:31.877502 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"79b7a04d-f324-40d0-ad2b-370cfef43858","Type":"ContainerStarted","Data":"94459071397bab42a5432e97d2a82ed90d6a1670865721bd5f60b89b0be2a2ed"}
Feb 17 16:13:32 crc kubenswrapper[4808]: I0217 16:13:32.283771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:13:32 crc kubenswrapper[4808]: E0217 16:13:32.284007 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 16:13:32 crc kubenswrapper[4808]: E0217 16:13:32.284297 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 16:13:32 crc kubenswrapper[4808]: E0217 16:13:32.284366 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:36.284346009 +0000 UTC m=+1179.800705082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found
Feb 17 16:13:32 crc kubenswrapper[4808]: I0217 16:13:32.709591 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-v8hvr" podUID="27d2df02-b7e7-4fe9-a125-5a6acf093c85" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.241672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.241798 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.325969 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.612953 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l2f2z"]
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.614039 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.623102 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.623401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l2f2z"]
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.708199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.708300 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.810794 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.810878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.824014 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.830773 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"root-account-create-update-l2f2z\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.939969 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l2f2z"
Feb 17 16:13:33 crc kubenswrapper[4808]: I0217 16:13:33.962288 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.362008 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cw2fg"]
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.365306 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.371711 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cw2fg"]
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.478618 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"]
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.486074 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"]
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.486186 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.489220 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.558044 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.558348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.659848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.660293 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.660481 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.660505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.662005 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.694311 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"glance-db-create-cw2fg\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.762316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.762389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.765262 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.779087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"glance-1c2d-account-create-update-5rmst\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.803075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst"
Feb 17 16:13:35 crc kubenswrapper[4808]: I0217 16:13:35.994149 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cw2fg"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.120533 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6mgt5"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.122289 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.131223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6mgt5"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.222857 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.224364 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.227223 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.230218 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288081 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288327 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288465 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288526 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.288718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: E0217 16:13:36.290181 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 17 16:13:36 crc kubenswrapper[4808]: E0217 16:13:36.290348 4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 17 16:13:36 crc kubenswrapper[4808]: E0217 16:13:36.290424 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:13:44.290405865 +0000 UTC m=+1187.806764938 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.389836 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.389903 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.389952 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.390008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.390930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.391374 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.423848 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"keystone-1e92-account-create-update-s8tnj\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.430443 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"keystone-db-create-6mgt5\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.450170 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mp9g8"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.451392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.466717 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.468153 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.473852 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.482410 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.490949 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mp9g8"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.507622 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"]
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.542026 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.594816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.595162 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.595269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.595348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697631 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697739 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.697895 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.699284 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.700463 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.715343 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"placement-db-create-mp9g8\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.715712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"placement-6fc9-account-create-update-hsl6c\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.824699 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8"
Feb 17 16:13:36 crc kubenswrapper[4808]: I0217 16:13:36.858159 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c"
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.192192 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"]
Feb 17 16:13:37 crc kubenswrapper[4808]: W0217 16:13:37.202366 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbacbd93_bbc0_4360_bc45_9782988bd3c0.slice/crio-fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465 WatchSource:0}: Error finding container fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465: Status 404 returned error can't find the container with id fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.223731 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.289169 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-pkj8k"
Feb 17 16:13:37 crc kubenswrapper[4808]: W0217 16:13:37.349236 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc5e9f09_05c9_4fa2_8e39_22ffa4fa8d2c.slice/crio-67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192 WatchSource:0}: Error finding container 67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192: Status 404 returned error can't find the container with id 67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.351547 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l2f2z"]
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.491363 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"]
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.501838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6mgt5"]
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.517317 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cw2fg"]
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.737674 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"]
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.747471 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mp9g8"]
Feb 17 16:13:37 crc kubenswrapper[4808]: W0217 16:13:37.793458 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56341195_0325_4b22_ba76_8f792fbbcdb6.slice/crio-d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225 WatchSource:0}: Error finding container d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225: Status 404 returned error can't find the container with id d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.929192 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerStarted","Data":"313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9"}
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.929239 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerStarted","Data":"c89dbe2cc7630ae1cc4dfb777a53044b9caf01f9b81ec512acbb427ca87dadf9"}
Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.933623 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/placement-6fc9-account-create-update-hsl6c" event={"ID":"58e700c8-ab25-47a2-a6cf-e85ffcb57e74","Type":"ContainerStarted","Data":"ff8a1308f30cac05f4582dcef33e2089bd45ba7c33c330702b7e8ec8f4a48526"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.940248 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"79b7a04d-f324-40d0-ad2b-370cfef43858","Type":"ContainerStarted","Data":"a72850f8f00fd340022c4bb892c35b0149af790964fead3e49b61535eefcdf37"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.940294 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"79b7a04d-f324-40d0-ad2b-370cfef43858","Type":"ContainerStarted","Data":"bcf18da17ab80ed0879939884efc09d2733aac447dc222187451816e2f2f9d3f"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.941066 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.943153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e92-account-create-update-s8tnj" event={"ID":"850d66dd-e985-408b-93a0-8251cfd8dbc5","Type":"ContainerStarted","Data":"285375d2088a10c12e0cc841d85c9fdfa40b8c2ff310c72a4cadbe5048c52b8c"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.948527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerStarted","Data":"531cd6842c615f80a678de85ab5ffd56ce530c2a4ddaf1a8a62d7dbfe638cf33"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.950468 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l2f2z" event={"ID":"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c","Type":"ContainerStarted","Data":"67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.951416 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mp9g8" event={"ID":"56341195-0325-4b22-ba76-8f792fbbcdb6","Type":"ContainerStarted","Data":"d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.952350 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cw2fg" event={"ID":"850baae5-89be-441f-85e0-f2f0ec68bdc3","Type":"ContainerStarted","Data":"590c5689226b24e8a79cadbae587b15db602a7fa85141bb00ffbdcd1faf2d3ef"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.955190 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-6mgt5" podStartSLOduration=1.955171259 podStartE2EDuration="1.955171259s" podCreationTimestamp="2026-02-17 16:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:37.944562622 +0000 UTC m=+1181.460921695" watchObservedRunningTime="2026-02-17 16:13:37.955171259 +0000 UTC m=+1181.471530342" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.957197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.967452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerStarted","Data":"8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.967503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" 
event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerStarted","Data":"fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465"} Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.969478 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.976461773 podStartE2EDuration="7.969459136s" podCreationTimestamp="2026-02-17 16:13:30 +0000 UTC" firstStartedPulling="2026-02-17 16:13:30.917313726 +0000 UTC m=+1174.433672799" lastFinishedPulling="2026-02-17 16:13:36.910311089 +0000 UTC m=+1180.426670162" observedRunningTime="2026-02-17 16:13:37.960473943 +0000 UTC m=+1181.476833026" watchObservedRunningTime="2026-02-17 16:13:37.969459136 +0000 UTC m=+1181.485818209" Feb 17 16:13:37 crc kubenswrapper[4808]: I0217 16:13:37.997756 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.008031 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-qg65w" podStartSLOduration=3.056000352 podStartE2EDuration="10.00800574s" podCreationTimestamp="2026-02-17 16:13:28 +0000 UTC" firstStartedPulling="2026-02-17 16:13:29.818382212 +0000 UTC m=+1173.334741285" lastFinishedPulling="2026-02-17 16:13:36.7703876 +0000 UTC m=+1180.286746673" observedRunningTime="2026-02-17 16:13:37.98401171 +0000 UTC m=+1181.500370803" watchObservedRunningTime="2026-02-17 16:13:38.00800574 +0000 UTC m=+1181.524364833" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.012345 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-1c2d-account-create-update-5rmst" podStartSLOduration=3.012326606 podStartE2EDuration="3.012326606s" podCreationTimestamp="2026-02-17 16:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 16:13:37.998158383 +0000 UTC m=+1181.514517466" watchObservedRunningTime="2026-02-17 16:13:38.012326606 +0000 UTC m=+1181.528685699" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.061683 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.066039 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" containerID="cri-o://3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" gracePeriod=10 Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.221740 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.864085 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.948525 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") pod \"24cc6fe1-da44-4d61-98bf-3088b398903b\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.948645 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") pod \"24cc6fe1-da44-4d61-98bf-3088b398903b\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.948728 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") pod \"24cc6fe1-da44-4d61-98bf-3088b398903b\" (UID: \"24cc6fe1-da44-4d61-98bf-3088b398903b\") " Feb 17 16:13:38 crc kubenswrapper[4808]: I0217 16:13:38.962877 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8" (OuterVolumeSpecName: "kube-api-access-zdqm8") pod "24cc6fe1-da44-4d61-98bf-3088b398903b" (UID: "24cc6fe1-da44-4d61-98bf-3088b398903b"). InnerVolumeSpecName "kube-api-access-zdqm8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.051541 4808 generic.go:334] "Generic (PLEG): container finished" podID="7419b027-2686-4ba4-9459-30a4362d34f0" containerID="313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.051660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerDied","Data":"313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.062827 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdqm8\" (UniqueName: \"kubernetes.io/projected/24cc6fe1-da44-4d61-98bf-3088b398903b-kube-api-access-zdqm8\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.074501 4808 generic.go:334] "Generic (PLEG): container finished" podID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerID="77cbcade43f0ae77b54c73845bcb62b81d16918f6513db83061d64f348ec9b2b" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.074604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mp9g8" event={"ID":"56341195-0325-4b22-ba76-8f792fbbcdb6","Type":"ContainerDied","Data":"77cbcade43f0ae77b54c73845bcb62b81d16918f6513db83061d64f348ec9b2b"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.077375 4808 generic.go:334] "Generic (PLEG): container finished" podID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerID="8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.077426 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" 
event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerDied","Data":"8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.079681 4808 generic.go:334] "Generic (PLEG): container finished" podID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerID="d6c0e57ec0c9fe5da75d2c778f8867455af3d9bb73146a28181bca20e679417d" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.079732 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cw2fg" event={"ID":"850baae5-89be-441f-85e0-f2f0ec68bdc3","Type":"ContainerDied","Data":"d6c0e57ec0c9fe5da75d2c778f8867455af3d9bb73146a28181bca20e679417d"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.085359 4808 generic.go:334] "Generic (PLEG): container finished" podID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerID="92a52a548321e7e91228a92677db66adc649f3fd4be4a1f0b2dcb81c8ce95063" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.085441 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc9-account-create-update-hsl6c" event={"ID":"58e700c8-ab25-47a2-a6cf-e85ffcb57e74","Type":"ContainerDied","Data":"92a52a548321e7e91228a92677db66adc649f3fd4be4a1f0b2dcb81c8ce95063"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.105439 4808 generic.go:334] "Generic (PLEG): container finished" podID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerID="b9a6e75c4872c463e0bee7ea278256a76575233d65a1cb8980723a4259e57365" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.105497 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e92-account-create-update-s8tnj" event={"ID":"850d66dd-e985-408b-93a0-8251cfd8dbc5","Type":"ContainerDied","Data":"b9a6e75c4872c463e0bee7ea278256a76575233d65a1cb8980723a4259e57365"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.121761 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.121936 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerDied","Data":"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.122688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" event={"ID":"24cc6fe1-da44-4d61-98bf-3088b398903b","Type":"ContainerDied","Data":"4a7ab805f716d84e3d73f9394b1b45757927f27450dd37708e63205a258bb4f5"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.122023 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5wrzq" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.122720 4808 scope.go:117] "RemoveContainer" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.131383 4808 generic.go:334] "Generic (PLEG): container finished" podID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerID="7aea08d602941315a47910cfb8dca2a1ac4425726486c35b99c77739c12a5b14" exitCode=0 Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.132277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l2f2z" event={"ID":"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c","Type":"ContainerDied","Data":"7aea08d602941315a47910cfb8dca2a1ac4425726486c35b99c77739c12a5b14"} Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.174445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "24cc6fe1-da44-4d61-98bf-3088b398903b" (UID: 
"24cc6fe1-da44-4d61-98bf-3088b398903b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.179435 4808 scope.go:117] "RemoveContainer" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.269647 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.289685 4808 scope.go:117] "RemoveContainer" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" Feb 17 16:13:39 crc kubenswrapper[4808]: E0217 16:13:39.290106 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce\": container with ID starting with 3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce not found: ID does not exist" containerID="3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.290130 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce"} err="failed to get container status \"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce\": rpc error: code = NotFound desc = could not find container \"3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce\": container with ID starting with 3df2b6c8480475dff990f580da87d30f986cfab5664d5aa6987e96c0458e40ce not found: ID does not exist" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.290149 4808 scope.go:117] "RemoveContainer" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" Feb 17 16:13:39 crc 
kubenswrapper[4808]: E0217 16:13:39.290353 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d\": container with ID starting with 5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d not found: ID does not exist" containerID="5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.290367 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d"} err="failed to get container status \"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d\": rpc error: code = NotFound desc = could not find container \"5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d\": container with ID starting with 5eef31ccf738b712b92d96f8cbf9367f57cb6ada66d559cdc21e7d0e94df0e1d not found: ID does not exist" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.298691 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config" (OuterVolumeSpecName: "config") pod "24cc6fe1-da44-4d61-98bf-3088b398903b" (UID: "24cc6fe1-da44-4d61-98bf-3088b398903b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.372700 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24cc6fe1-da44-4d61-98bf-3088b398903b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.458986 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:13:39 crc kubenswrapper[4808]: I0217 16:13:39.473721 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5wrzq"] Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.527676 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.605298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") pod \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.605432 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") pod \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\" (UID: \"dbacbd93-bbc0-4360-bc45-9782988bd3c0\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.606665 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbacbd93-bbc0-4360-bc45-9782988bd3c0" (UID: "dbacbd93-bbc0-4360-bc45-9782988bd3c0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.613924 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j" (OuterVolumeSpecName: "kube-api-access-hm29j") pod "dbacbd93-bbc0-4360-bc45-9782988bd3c0" (UID: "dbacbd93-bbc0-4360-bc45-9782988bd3c0"). InnerVolumeSpecName "kube-api-access-hm29j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.708978 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm29j\" (UniqueName: \"kubernetes.io/projected/dbacbd93-bbc0-4360-bc45-9782988bd3c0-kube-api-access-hm29j\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.709022 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbacbd93-bbc0-4360-bc45-9782988bd3c0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.727868 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.734948 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.765024 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810033 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") pod \"56341195-0325-4b22-ba76-8f792fbbcdb6\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810473 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") pod \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810642 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") pod \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\" (UID: \"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810786 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") pod \"56341195-0325-4b22-ba76-8f792fbbcdb6\" (UID: \"56341195-0325-4b22-ba76-8f792fbbcdb6\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.810919 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") pod \"850d66dd-e985-408b-93a0-8251cfd8dbc5\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811747 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") pod \"850d66dd-e985-408b-93a0-8251cfd8dbc5\" (UID: \"850d66dd-e985-408b-93a0-8251cfd8dbc5\") " Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.813559 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pfcvm" podUID="8a76a2ff-ed1a-4279-898c-54e85973f024" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:13:40 crc kubenswrapper[4808]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:13:40 crc kubenswrapper[4808]: > Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811083 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" (UID: "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811602 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56341195-0325-4b22-ba76-8f792fbbcdb6" (UID: "56341195-0325-4b22-ba76-8f792fbbcdb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.811627 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "850d66dd-e985-408b-93a0-8251cfd8dbc5" (UID: "850d66dd-e985-408b-93a0-8251cfd8dbc5"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.822766 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd" (OuterVolumeSpecName: "kube-api-access-8wnbd") pod "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" (UID: "bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c"). InnerVolumeSpecName "kube-api-access-8wnbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.822874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr" (OuterVolumeSpecName: "kube-api-access-tv5tr") pod "850d66dd-e985-408b-93a0-8251cfd8dbc5" (UID: "850d66dd-e985-408b-93a0-8251cfd8dbc5"). InnerVolumeSpecName "kube-api-access-tv5tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.839878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl" (OuterVolumeSpecName: "kube-api-access-2v7cl") pod "56341195-0325-4b22-ba76-8f792fbbcdb6" (UID: "56341195-0325-4b22-ba76-8f792fbbcdb6"). InnerVolumeSpecName "kube-api-access-2v7cl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915030 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v7cl\" (UniqueName: \"kubernetes.io/projected/56341195-0325-4b22-ba76-8f792fbbcdb6-kube-api-access-2v7cl\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915391 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915404 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wnbd\" (UniqueName: \"kubernetes.io/projected/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c-kube-api-access-8wnbd\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915417 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56341195-0325-4b22-ba76-8f792fbbcdb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915428 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850d66dd-e985-408b-93a0-8251cfd8dbc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.915440 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv5tr\" (UniqueName: \"kubernetes.io/projected/850d66dd-e985-408b-93a0-8251cfd8dbc5-kube-api-access-tv5tr\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.932317 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.986231 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:40 crc kubenswrapper[4808]: I0217 16:13:40.992958 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.016249 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") pod \"7419b027-2686-4ba4-9459-30a4362d34f0\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.016360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") pod \"7419b027-2686-4ba4-9459-30a4362d34f0\" (UID: \"7419b027-2686-4ba4-9459-30a4362d34f0\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.017118 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7419b027-2686-4ba4-9459-30a4362d34f0" (UID: "7419b027-2686-4ba4-9459-30a4362d34f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.019609 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6" (OuterVolumeSpecName: "kube-api-access-ngpd6") pod "7419b027-2686-4ba4-9459-30a4362d34f0" (UID: "7419b027-2686-4ba4-9459-30a4362d34f0"). 
InnerVolumeSpecName "kube-api-access-ngpd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117425 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") pod \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117567 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") pod \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\" (UID: \"58e700c8-ab25-47a2-a6cf-e85ffcb57e74\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117622 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") pod \"850baae5-89be-441f-85e0-f2f0ec68bdc3\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117758 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") pod \"850baae5-89be-441f-85e0-f2f0ec68bdc3\" (UID: \"850baae5-89be-441f-85e0-f2f0ec68bdc3\") " Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.117962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58e700c8-ab25-47a2-a6cf-e85ffcb57e74" (UID: "58e700c8-ab25-47a2-a6cf-e85ffcb57e74"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "850baae5-89be-441f-85e0-f2f0ec68bdc3" (UID: "850baae5-89be-441f-85e0-f2f0ec68bdc3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118743 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118764 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7419b027-2686-4ba4-9459-30a4362d34f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118774 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/850baae5-89be-441f-85e0-f2f0ec68bdc3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.118784 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngpd6\" (UniqueName: \"kubernetes.io/projected/7419b027-2686-4ba4-9459-30a4362d34f0-kube-api-access-ngpd6\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.121223 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh" (OuterVolumeSpecName: "kube-api-access-krhzh") pod "58e700c8-ab25-47a2-a6cf-e85ffcb57e74" (UID: "58e700c8-ab25-47a2-a6cf-e85ffcb57e74"). 
InnerVolumeSpecName "kube-api-access-krhzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.128275 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885" (OuterVolumeSpecName: "kube-api-access-b8885") pod "850baae5-89be-441f-85e0-f2f0ec68bdc3" (UID: "850baae5-89be-441f-85e0-f2f0ec68bdc3"). InnerVolumeSpecName "kube-api-access-b8885". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.154845 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l2f2z" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.156607 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6mgt5" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.156604 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" path="/var/lib/kubelet/pods/24cc6fe1-da44-4d61-98bf-3088b398903b/volumes" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.160320 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mp9g8" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.161906 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c2d-account-create-update-5rmst" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.163871 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cw2fg" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.167990 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l2f2z" event={"ID":"bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c","Type":"ContainerDied","Data":"67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.173825 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67e1d9e4beb27bf149e3172995f31de56d2719eb7b25ce4c319edba907379192" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.173993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6mgt5" event={"ID":"7419b027-2686-4ba4-9459-30a4362d34f0","Type":"ContainerDied","Data":"c89dbe2cc7630ae1cc4dfb777a53044b9caf01f9b81ec512acbb427ca87dadf9"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174148 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89dbe2cc7630ae1cc4dfb777a53044b9caf01f9b81ec512acbb427ca87dadf9" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174266 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mp9g8" event={"ID":"56341195-0325-4b22-ba76-8f792fbbcdb6","Type":"ContainerDied","Data":"d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174404 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1561dcdfaac7c99f53a2dd25dc15dd288466f9c31855a26306f9f871e78f225" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c2d-account-create-update-5rmst" event={"ID":"dbacbd93-bbc0-4360-bc45-9782988bd3c0","Type":"ContainerDied","Data":"fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465"} Feb 17 16:13:41 crc 
kubenswrapper[4808]: I0217 16:13:41.174681 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc073784c031cac98470bba284bdb32968853c4aeeff19e47471f3b9dbc91465" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174800 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cw2fg" event={"ID":"850baae5-89be-441f-85e0-f2f0ec68bdc3","Type":"ContainerDied","Data":"590c5689226b24e8a79cadbae587b15db602a7fa85141bb00ffbdcd1faf2d3ef"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174934 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590c5689226b24e8a79cadbae587b15db602a7fa85141bb00ffbdcd1faf2d3ef" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174959 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174764 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fc9-account-create-update-hsl6c" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174983 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fc9-account-create-update-hsl6c" event={"ID":"58e700c8-ab25-47a2-a6cf-e85ffcb57e74","Type":"ContainerDied","Data":"ff8a1308f30cac05f4582dcef33e2089bd45ba7c33c330702b7e8ec8f4a48526"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.174993 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff8a1308f30cac05f4582dcef33e2089bd45ba7c33c330702b7e8ec8f4a48526" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.176554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1e92-account-create-update-s8tnj" event={"ID":"850d66dd-e985-408b-93a0-8251cfd8dbc5","Type":"ContainerDied","Data":"285375d2088a10c12e0cc841d85c9fdfa40b8c2ff310c72a4cadbe5048c52b8c"} Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.176586 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="285375d2088a10c12e0cc841d85c9fdfa40b8c2ff310c72a4cadbe5048c52b8c" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.176620 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1e92-account-create-update-s8tnj" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.221163 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhzh\" (UniqueName: \"kubernetes.io/projected/58e700c8-ab25-47a2-a6cf-e85ffcb57e74-kube-api-access-krhzh\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.221192 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8885\" (UniqueName: \"kubernetes.io/projected/850baae5-89be-441f-85e0-f2f0ec68bdc3-kube-api-access-b8885\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.913981 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:41 crc kubenswrapper[4808]: I0217 16:13:41.923354 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l2f2z"] Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.007466 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008293 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008317 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008333 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008341 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerName="mariadb-account-create-update" Feb 17 
16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008352 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008360 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008391 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008399 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008410 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008419 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008430 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008438 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008447 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="init" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008454 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" 
containerName="init" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008463 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008471 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: E0217 16:13:42.008485 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008493 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008707 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008728 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008738 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008749 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="24cc6fe1-da44-4d61-98bf-3088b398903b" containerName="dnsmasq-dns" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008760 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008779 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" containerName="mariadb-database-create" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008787 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.008795 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" containerName="mariadb-account-create-update" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.009466 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.014675 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.023204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.135088 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.135249 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc 
kubenswrapper[4808]: I0217 16:13:42.237057 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.238736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.239480 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.258294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"root-account-create-update-kt8sq\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.331629 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:42 crc kubenswrapper[4808]: I0217 16:13:42.705495 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:43 crc kubenswrapper[4808]: I0217 16:13:43.166238 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c" path="/var/lib/kubelet/pods/bc5e9f09-05c9-4fa2-8e39-22ffa4fa8d2c/volumes" Feb 17 16:13:43 crc kubenswrapper[4808]: I0217 16:13:43.210379 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerStarted","Data":"aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94"} Feb 17 16:13:43 crc kubenswrapper[4808]: I0217 16:13:43.210419 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerStarted","Data":"98c2800077894190b1e9521bc93e98e57fb1374bafdeb5e31d595195ddc58cf4"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.220871 4808 generic.go:334] "Generic (PLEG): container finished" podID="698c36e9-5f87-4836-8660-aaceac669005" containerID="19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430" exitCode=0 Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.220954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerDied","Data":"19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.222905 4808 generic.go:334] "Generic (PLEG): container finished" podID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerID="aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94" exitCode=0 Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 
16:13:44.222953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerDied","Data":"aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.225126 4808 generic.go:334] "Generic (PLEG): container finished" podID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" exitCode=0 Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.225171 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerDied","Data":"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9"} Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.256831 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-kt8sq" podStartSLOduration=3.256794888 podStartE2EDuration="3.256794888s" podCreationTimestamp="2026-02-17 16:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:13:43.22788208 +0000 UTC m=+1186.744241153" watchObservedRunningTime="2026-02-17 16:13:44.256794888 +0000 UTC m=+1187.773153961" Feb 17 16:13:44 crc kubenswrapper[4808]: I0217 16:13:44.340449 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0" Feb 17 16:13:44 crc kubenswrapper[4808]: E0217 16:13:44.341322 4808 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 17 16:13:44 crc kubenswrapper[4808]: E0217 16:13:44.341351 
4808 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 17 16:13:44 crc kubenswrapper[4808]: E0217 16:13:44.341398 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift podName:8f52ebe4-f003-4d0b-8539-1d406db95b2f nodeName:}" failed. No retries permitted until 2026-02-17 16:14:00.341379059 +0000 UTC m=+1203.857738222 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift") pod "swift-storage-0" (UID: "8f52ebe4-f003-4d0b-8539-1d406db95b2f") : configmap "swift-ring-files" not found Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.235525 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerStarted","Data":"3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.244921 4808 generic.go:334] "Generic (PLEG): container finished" podID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerID="531cd6842c615f80a678de85ab5ffd56ce530c2a4ddaf1a8a62d7dbfe638cf33" exitCode=0 Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.244979 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerDied","Data":"531cd6842c615f80a678de85ab5ffd56ce530c2a4ddaf1a8a62d7dbfe638cf33"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.248856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerStarted","Data":"d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e"} Feb 17 16:13:45 crc kubenswrapper[4808]: 
I0217 16:13:45.249118 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.252915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerStarted","Data":"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807"} Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.253529 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.270742 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.15672832 podStartE2EDuration="1m8.27070954s" podCreationTimestamp="2026-02-17 16:12:37 +0000 UTC" firstStartedPulling="2026-02-17 16:12:50.127329789 +0000 UTC m=+1133.643688862" lastFinishedPulling="2026-02-17 16:13:44.241311009 +0000 UTC m=+1187.757670082" observedRunningTime="2026-02-17 16:13:45.268380878 +0000 UTC m=+1188.784739951" watchObservedRunningTime="2026-02-17 16:13:45.27070954 +0000 UTC m=+1188.787068613" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.347507 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=57.555325178 podStartE2EDuration="1m15.347484359s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:49.427320235 +0000 UTC m=+1132.943679308" lastFinishedPulling="2026-02-17 16:13:07.219479416 +0000 UTC m=+1150.735838489" observedRunningTime="2026-02-17 16:13:45.318197746 +0000 UTC m=+1188.834556819" watchObservedRunningTime="2026-02-17 16:13:45.347484359 +0000 UTC m=+1188.863843432" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.348793 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=56.738267396 podStartE2EDuration="1m15.348787774s" podCreationTimestamp="2026-02-17 16:12:30 +0000 UTC" firstStartedPulling="2026-02-17 16:12:48.992849162 +0000 UTC m=+1132.509208235" lastFinishedPulling="2026-02-17 16:13:07.60336953 +0000 UTC m=+1151.119728613" observedRunningTime="2026-02-17 16:13:45.342212276 +0000 UTC m=+1188.858571359" watchObservedRunningTime="2026-02-17 16:13:45.348787774 +0000 UTC m=+1188.865146847" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.630278 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.631616 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.634848 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.635407 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xhb8t" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.643220 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.766839 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.778821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.778866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.779015 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.779049 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.815264 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.820755 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-pfcvm" 
podUID="8a76a2ff-ed1a-4279-898c-54e85973f024" containerName="ovn-controller" probeResult="failure" output=< Feb 17 16:13:45 crc kubenswrapper[4808]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 17 16:13:45 crc kubenswrapper[4808]: > Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.821437 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-wkzp6" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880207 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") pod \"6940f857-9d37-4d69-8b1a-33208fe6de43\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") pod \"6940f857-9d37-4d69-8b1a-33208fe6de43\" (UID: \"6940f857-9d37-4d69-8b1a-33208fe6de43\") " Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880620 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880655 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 
16:13:45.880763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.880787 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.883248 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6940f857-9d37-4d69-8b1a-33208fe6de43" (UID: "6940f857-9d37-4d69-8b1a-33208fe6de43"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.889309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.889768 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.898192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.898399 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2" (OuterVolumeSpecName: "kube-api-access-kn7p2") pod "6940f857-9d37-4d69-8b1a-33208fe6de43" (UID: "6940f857-9d37-4d69-8b1a-33208fe6de43"). InnerVolumeSpecName "kube-api-access-kn7p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.920242 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod \"glance-db-sync-4mdzt\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.949014 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.985748 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6940f857-9d37-4d69-8b1a-33208fe6de43-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:45 crc kubenswrapper[4808]: I0217 16:13:45.985784 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn7p2\" (UniqueName: \"kubernetes.io/projected/6940f857-9d37-4d69-8b1a-33208fe6de43-kube-api-access-kn7p2\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.272905 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-kt8sq" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.276889 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-kt8sq" event={"ID":"6940f857-9d37-4d69-8b1a-33208fe6de43","Type":"ContainerDied","Data":"98c2800077894190b1e9521bc93e98e57fb1374bafdeb5e31d595195ddc58cf4"} Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.276950 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c2800077894190b1e9521bc93e98e57fb1374bafdeb5e31d595195ddc58cf4" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.285064 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:46 crc kubenswrapper[4808]: E0217 16:13:46.286806 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerName="mariadb-account-create-update" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.286826 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerName="mariadb-account-create-update" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.287100 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" containerName="mariadb-account-create-update" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.290371 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.297800 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.368646 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.409772 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.409903 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.409926 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.410009 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: 
\"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.410062 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.410221 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514361 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514399 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514438 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod 
\"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514479 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514540 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514654 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514782 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514811 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: 
\"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.514866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.515710 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.517707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.543967 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"ovn-controller-pfcvm-config-zqwjk\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.644774 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.773560 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:13:46 crc kubenswrapper[4808]: I0217 16:13:46.926596 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023510 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023568 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023625 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023802 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023877 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.023910 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") pod \"eb2856a7-c37a-4ecc-a4a2-c49864240315\" (UID: \"eb2856a7-c37a-4ecc-a4a2-c49864240315\") " Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.033252 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.033482 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.034288 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.058816 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk" (OuterVolumeSpecName: "kube-api-access-9vndk") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "kube-api-access-9vndk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.059292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts" (OuterVolumeSpecName: "scripts") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.073132 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.082786 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb2856a7-c37a-4ecc-a4a2-c49864240315" (UID: "eb2856a7-c37a-4ecc-a4a2-c49864240315"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126134 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126355 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vndk\" (UniqueName: \"kubernetes.io/projected/eb2856a7-c37a-4ecc-a4a2-c49864240315-kube-api-access-9vndk\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126476 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126559 4808 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126713 4808 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/eb2856a7-c37a-4ecc-a4a2-c49864240315-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126780 4808 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/eb2856a7-c37a-4ecc-a4a2-c49864240315-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.126873 4808 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/eb2856a7-c37a-4ecc-a4a2-c49864240315-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.217355 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"] Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.286688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm-config-zqwjk" event={"ID":"92655725-c36f-4e8a-bdb4-12fa4e41a3d7","Type":"ContainerStarted","Data":"04e048c7a3bbfd39b61c305cae990b37bd53a929ece350691f5e86c6d1b68fd6"} Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.289575 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-qg65w" event={"ID":"eb2856a7-c37a-4ecc-a4a2-c49864240315","Type":"ContainerDied","Data":"c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208"} Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.289631 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c158428c095eaa91f94460c1176f203740b31134ec5ab68c67c7165466a47208" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.290137 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-qg65w" Feb 17 16:13:47 crc kubenswrapper[4808]: I0217 16:13:47.291445 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerStarted","Data":"e5bfc747bb74b14a5184eb3f8c16443aca59a2667d60646ea7965a405418e0b0"} Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.214884 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.561980 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.636097 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:48 crc kubenswrapper[4808]: I0217 16:13:48.644938 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-kt8sq"] Feb 17 16:13:49 crc kubenswrapper[4808]: I0217 16:13:49.154545 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6940f857-9d37-4d69-8b1a-33208fe6de43" path="/var/lib/kubelet/pods/6940f857-9d37-4d69-8b1a-33208fe6de43/volumes" Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.315266 4808 generic.go:334] "Generic (PLEG): container finished" podID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerID="393504cd886f25701edec85a116ae5e2c966bd8cc6f3213385ba9edc2a2c6ec3" exitCode=0 Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.315332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm-config-zqwjk" 
event={"ID":"92655725-c36f-4e8a-bdb4-12fa4e41a3d7","Type":"ContainerDied","Data":"393504cd886f25701edec85a116ae5e2c966bd8cc6f3213385ba9edc2a2c6ec3"} Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.441762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 17 16:13:50 crc kubenswrapper[4808]: I0217 16:13:50.794178 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-pfcvm" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.675506 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843158 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843255 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843384 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843465 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843488 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") pod \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\" (UID: \"92655725-c36f-4e8a-bdb4-12fa4e41a3d7\") " Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843881 4808 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.843895 4808 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.844476 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.844571 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run" (OuterVolumeSpecName: "var-run") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.845518 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts" (OuterVolumeSpecName: "scripts") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.851864 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg" (OuterVolumeSpecName: "kube-api-access-gfnrg") pod "92655725-c36f-4e8a-bdb4-12fa4e41a3d7" (UID: "92655725-c36f-4e8a-bdb4-12fa4e41a3d7"). InnerVolumeSpecName "kube-api-access-gfnrg". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945556 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfnrg\" (UniqueName: \"kubernetes.io/projected/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-kube-api-access-gfnrg\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945599 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945608 4808 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-var-run\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:51 crc kubenswrapper[4808]: I0217 16:13:51.945618 4808 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/92655725-c36f-4e8a-bdb4-12fa4e41a3d7-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.334480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-pfcvm-config-zqwjk" event={"ID":"92655725-c36f-4e8a-bdb4-12fa4e41a3d7","Type":"ContainerDied","Data":"04e048c7a3bbfd39b61c305cae990b37bd53a929ece350691f5e86c6d1b68fd6"}
Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.334748 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e048c7a3bbfd39b61c305cae990b37bd53a929ece350691f5e86c6d1b68fd6"
Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.334533 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-pfcvm-config-zqwjk"
Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.772427 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"]
Feb 17 16:13:52 crc kubenswrapper[4808]: I0217 16:13:52.778749 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-pfcvm-config-zqwjk"]
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.158897 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" path="/var/lib/kubelet/pods/92655725-c36f-4e8a-bdb4-12fa4e41a3d7/volumes"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.561531 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.564879 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.671703 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f2jqv"]
Feb 17 16:13:53 crc kubenswrapper[4808]: E0217 16:13:53.672158 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerName="swift-ring-rebalance"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672181 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerName="swift-ring-rebalance"
Feb 17 16:13:53 crc kubenswrapper[4808]: E0217 16:13:53.672190 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerName="ovn-config"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672196 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerName="ovn-config"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672395 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="92655725-c36f-4e8a-bdb4-12fa4e41a3d7" containerName="ovn-config"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.672424 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2856a7-c37a-4ecc-a4a2-c49864240315" containerName="swift-ring-rebalance"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.673732 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.676635 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.683916 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2jqv"]
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.774728 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.775689 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.877063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.877366 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.877769 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.895227 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"root-account-create-update-f2jqv\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:53 crc kubenswrapper[4808]: I0217 16:13:53.988585 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2jqv"
Feb 17 16:13:54 crc kubenswrapper[4808]: I0217 16:13:54.352058 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 17 16:13:54 crc kubenswrapper[4808]: I0217 16:13:54.467296 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f2jqv"]
Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.831396 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.831988 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" containerID="cri-o://4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693" gracePeriod=600
Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.832219 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar" containerID="cri-o://3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7" gracePeriod=600
Feb 17 16:13:56 crc kubenswrapper[4808]: I0217 16:13:56.832323 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader" containerID="cri-o://8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc" gracePeriod=600
Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382099 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7" exitCode=0
Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382696 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc" exitCode=0
Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382712 4808 generic.go:334] "Generic (PLEG): container finished" podID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerID="4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693" exitCode=0
Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382198 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7"}
Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382756 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc"}
Feb 17 16:13:57 crc kubenswrapper[4808]: I0217 16:13:57.382775 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693"}
Feb 17 16:13:58 crc kubenswrapper[4808]: I0217 16:13:58.215185 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="c7929d5b-e791-419e-8039-50cc9f8202f2" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 17 16:13:58 crc kubenswrapper[4808]: I0217 16:13:58.563106 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused"
Feb 17 16:14:00 crc kubenswrapper[4808]: I0217 16:14:00.412972 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:14:00 crc kubenswrapper[4808]: I0217 16:14:00.460331 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8f52ebe4-f003-4d0b-8539-1d406db95b2f-etc-swift\") pod \"swift-storage-0\" (UID: \"8f52ebe4-f003-4d0b-8539-1d406db95b2f\") " pod="openstack/swift-storage-0"
Feb 17 16:14:00 crc kubenswrapper[4808]: I0217 16:14:00.462127 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 17 16:14:01 crc kubenswrapper[4808]: I0217 16:14:01.788840 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.047210 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.167093 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jmq6n"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.173905 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.181721 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmq6n"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.304326 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.305908 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.310399 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.322384 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.350026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.350108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452239 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.452349 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.453131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.473518 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.474967 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.490624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"cinder-db-create-jmq6n\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.496529 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.497074 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.498418 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.500472 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.512725 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.525802 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.554871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.555297 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.555348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.555472 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.556312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.588896 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"cinder-78cc-account-create-update-k7vgl\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.590763 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jqrq2"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.591946 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.599305 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.600352 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.609880 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqrq2"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.614860 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.619242 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.626057 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.657977 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.658036 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.658109 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.658149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.660001 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.671014 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kzjns"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.673028 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.676268 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.676494 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.676886 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.684892 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.687385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"cloudkitty-db-create-r5lfk\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.689699 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ktddg"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.690862 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.723744 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ktddg"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.740782 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kzjns"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.761490 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.761847 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762025 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762141 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762347 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762705 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.762562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.792190 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"neutron-59d8-account-create-update-5vsvx\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.842543 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.855749 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.861037 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.862248 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864458 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864518 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864562 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864624 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864651 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864668 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864708 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.864789 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.865481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.865595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.868885 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.869254 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.884814 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"]
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.886742 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"keystone-db-sync-kzjns\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") " pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.887347 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"barbican-8c80-account-create-update-rk4jj\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " pod="openstack/barbican-8c80-account-create-update-rk4jj"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.918092 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"barbican-db-create-jqrq2\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " pod="openstack/barbican-db-create-jqrq2"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg"
Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966456 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8"
Feb 17 16:14:02 crc kubenswrapper[4808]: 
I0217 16:14:02.966476 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.966616 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.967158 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.983916 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"neutron-db-create-ktddg\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:02 crc kubenswrapper[4808]: I0217 16:14:02.983936 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.046063 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzjns" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.057409 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.068429 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.068514 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.069113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.086541 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"cloudkitty-a9c6-account-create-update-48vv8\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.225920 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:03 crc kubenswrapper[4808]: W0217 16:14:03.522191 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7377369f_b540_4b85_be05_4200c9695a41.slice/crio-8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3 WatchSource:0}: Error finding container 8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3: Status 404 returned error can't find the container with id 8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3 Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.526975 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 17 16:14:03 crc kubenswrapper[4808]: I0217 16:14:03.562352 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": dial tcp 10.217.0.112:9090: connect: connection refused" Feb 17 16:14:03 crc kubenswrapper[4808]: E0217 16:14:03.655047 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 17 16:14:03 crc kubenswrapper[4808]: E0217 16:14:03.655207 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb486,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-4mdzt_openstack(e4002815-8dd4-4668-bea7-0d54bdaa4dd6): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Feb 17 16:14:03 crc kubenswrapper[4808]: E0217 16:14:03.656433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-4mdzt" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.132711 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301461 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301519 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301583 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") pod 
\"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301612 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301683 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301747 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.301791 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc 
kubenswrapper[4808]: I0217 16:14:04.301819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") pod \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\" (UID: \"2917eca2-0431-4bd6-ad96-ab8464cc4fd7\") " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.302856 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.303715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.304716 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.324963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.325033 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7" (OuterVolumeSpecName: "kube-api-access-sh7d7") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "kube-api-access-sh7d7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.325177 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.331802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config" (OuterVolumeSpecName: "config") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.331815 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out" (OuterVolumeSpecName: "config-out") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.364219 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.380059 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config" (OuterVolumeSpecName: "web-config") pod "2917eca2-0431-4bd6-ad96-ab8464cc4fd7" (UID: "2917eca2-0431-4bd6-ad96-ab8464cc4fd7"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403621 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh7d7\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-kube-api-access-sh7d7\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403685 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") on node \"crc\" " Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403698 4808 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403711 4808 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403721 4808 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config-out\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403731 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403740 4808 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-web-config\") on node \"crc\" DevicePath \"\"" Feb 
17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403749 4808 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403759 4808 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.403768 4808 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2917eca2-0431-4bd6-ad96-ab8464cc4fd7-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.423054 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.423248 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a") on node "crc" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.437785 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"2917eca2-0431-4bd6-ad96-ab8464cc4fd7","Type":"ContainerDied","Data":"c5db49362fb8e196d602a48475009fd093a64b0b760100ed93c1a54dba3d1832"} Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.437809 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.437841 4808 scope.go:117] "RemoveContainer" containerID="3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.441211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerStarted","Data":"2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66"} Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.441265 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerStarted","Data":"8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3"} Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.459803 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-4mdzt" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.494150 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-f2jqv" podStartSLOduration=11.494133743999999 podStartE2EDuration="11.494133744s" podCreationTimestamp="2026-02-17 16:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:04.492712486 +0000 UTC m=+1208.009071559" watchObservedRunningTime="2026-02-17 16:14:04.494133744 +0000 UTC m=+1208.010492807" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.495702 4808 scope.go:117] "RemoveContainer" 
containerID="8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.505396 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.637732 4808 scope.go:117] "RemoveContainer" containerID="4b0c39d37d11b4b4e6ab329ec7e07436445d5087b94a405b5022cc84ee9f2693" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.645725 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.680921 4808 scope.go:117] "RemoveContainer" containerID="2fc63ca226fc458b6690177cc943e7e0ca56b5c8e5a076cf9854b9dccf7b50f0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.692317 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702372 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702824 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702835 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader" Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702849 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="init-config-reloader" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702856 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" 
containerName="init-config-reloader" Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702868 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702875 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar" Feb 17 16:14:04 crc kubenswrapper[4808]: E0217 16:14:04.702887 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.702893 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.703062 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="prometheus" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.703077 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="thanos-sidecar" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.703088 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" containerName="config-reloader" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.704725 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.712025 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723316 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723598 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723731 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.723857 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.724980 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.725112 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.726089 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-2wbtf" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.727791 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.727868 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810310 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810537 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc 
kubenswrapper[4808]: I0217 16:14:04.810568 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810634 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810676 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810806 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810832 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh72v\" (UniqueName: 
\"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-kube-api-access-lh72v\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.810925 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.852106 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jqrq2"] Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912127 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912238 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912262 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912285 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh72v\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-kube-api-access-lh72v\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912377 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912399 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " 
pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912448 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.912468 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.913597 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.914393 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.918124 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.919139 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.922396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.922593 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dadd7e91-13f0-4ba2-9f87-ad057567a56d-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.926813 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.926947 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f40780962e64d13d6799d8a1c9a177793dc18d1eb26c87512c3b4aff3215b0d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.928157 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.935151 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dadd7e91-13f0-4ba2-9f87-ad057567a56d-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.935285 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.943533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.944223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh72v\" (UniqueName: \"kubernetes.io/projected/dadd7e91-13f0-4ba2-9f87-ad057567a56d-kube-api-access-lh72v\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.944239 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dadd7e91-13f0-4ba2-9f87-ad057567a56d-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:04 crc kubenswrapper[4808]: I0217 16:14:04.967873 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0040876f-8578-4a75-9f3f-72945b4c5b7a\") pod \"prometheus-metric-storage-0\" (UID: \"dadd7e91-13f0-4ba2-9f87-ad057567a56d\") " pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.093242 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.163948 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2917eca2-0431-4bd6-ad96-ab8464cc4fd7" path="/var/lib/kubelet/pods/2917eca2-0431-4bd6-ad96-ab8464cc4fd7/volumes" Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.198686 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.302992 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jmq6n"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.311183 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.330518 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.339368 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.351745 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ktddg"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.354617 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.361156 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"] Feb 17 16:14:05 crc kubenswrapper[4808]: W0217 16:14:05.387666 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2495c4d6_8174_4b4d_9114_968620fbba31.slice/crio-e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51 WatchSource:0}: Error 
finding container e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51: Status 404 returned error can't find the container with id e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51 Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.445061 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.451554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmq6n" event={"ID":"3ccecd7d-0e59-4336-a6ec-a595adbb727e","Type":"ContainerStarted","Data":"6c4dad549168fd0fe9877db14a616f977db4f3678b2cef50d4cc95501cb7ec97"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.454851 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d8-account-create-update-5vsvx" event={"ID":"02478fdd-380d-42f9-b105-c3ae86d224a8","Type":"ContainerStarted","Data":"3b5e73a2bf501307ef0912c3e2417e209a9bf79f1629e4736731809703ca6124"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.465422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerStarted","Data":"c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.465476 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerStarted","Data":"f21b1b34203e339a6df9f3de1f3c14db9849e5fd507a49d6a22a7fc36cc73dbc"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.468026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-78cc-account-create-update-k7vgl" event={"ID":"e183e901-16a0-43cf-9ce5-ef36da8686d1","Type":"ContainerStarted","Data":"e734ff22797424d60d75d0ff894eb99b0a93ed10a3801fb6e5b9a52dcc8e1b52"} Feb 17 16:14:05 crc kubenswrapper[4808]: 
I0217 16:14:05.470216 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.470868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8c80-account-create-update-rk4jj" event={"ID":"e5180ea6-12c0-4463-8fe5-c35ab2a15b44","Type":"ContainerStarted","Data":"1c00c6c47bb9156cd63db3c65a93373cbafb8faed7fa643611f22da349c11bb0"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.474236 4808 generic.go:334] "Generic (PLEG): container finished" podID="7377369f-b540-4b85-be05-4200c9695a41" containerID="2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66" exitCode=0 Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.474282 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerDied","Data":"2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.478074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ktddg" event={"ID":"ff670244-5344-4409-9823-6bfcf9ed274d","Type":"ContainerStarted","Data":"db36c3bbf39537df83a4da37662c8e67b4aa150cf22c4630a5ddf0b8ff0b32b4"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.479445 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-r5lfk" event={"ID":"72e328d4-94e9-42bc-ae1c-b07b01d80072","Type":"ContainerStarted","Data":"021f8a63c457f5f6931040c6e0c6166d1f2402d15c0182fc36f0fd1a25056869"} Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.488204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" event={"ID":"2495c4d6-8174-4b4d-9114-968620fbba31","Type":"ContainerStarted","Data":"e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51"} Feb 17 16:14:05 crc 
kubenswrapper[4808]: I0217 16:14:05.488539 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-jqrq2" podStartSLOduration=3.4885275079999998 podStartE2EDuration="3.488527508s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:05.487254394 +0000 UTC m=+1209.003613467" watchObservedRunningTime="2026-02-17 16:14:05.488527508 +0000 UTC m=+1209.004886581" Feb 17 16:14:05 crc kubenswrapper[4808]: I0217 16:14:05.490280 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerStarted","Data":"775b438b7af2b3cc184f6f5f5f4c39d337ef64447d3370a28378044cb5ec6a4d"} Feb 17 16:14:05 crc kubenswrapper[4808]: W0217 16:14:05.516470 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f52ebe4_f003_4d0b_8539_1d406db95b2f.slice/crio-770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c WatchSource:0}: Error finding container 770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c: Status 404 returned error can't find the container with id 770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c Feb 17 16:14:05 crc kubenswrapper[4808]: E0217 16:14:05.659187 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02cbd83_d077_4812_b852_7fe9a0182b71.slice/crio-c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02cbd83_d077_4812_b852_7fe9a0182b71.slice/crio-conmon-c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.501383 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"ef24f9e78ce98b3bda972fae86b77ebebfb7fb39b2c1ff23acc62ed24557426c"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.506106 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"770d1784cc30791394346c685d388c307608a4ff9fb0c6b6f3ca2670fbb6299c"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.509539 4808 generic.go:334] "Generic (PLEG): container finished" podID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerID="b727a664b9c0061ba9f01801dd0228679fbc0026b1e712729a3b0f80c6eddfb3" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.509842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmq6n" event={"ID":"3ccecd7d-0e59-4336-a6ec-a595adbb727e","Type":"ContainerDied","Data":"b727a664b9c0061ba9f01801dd0228679fbc0026b1e712729a3b0f80c6eddfb3"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.511641 4808 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d8-account-create-update-5vsvx" event={"ID":"02478fdd-380d-42f9-b105-c3ae86d224a8","Type":"ContainerDied","Data":"468b053d64c80baec6de3b54c4b2f477a89ae15f7b2f83e72b93e7a2a09b7e47"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.511732 4808 generic.go:334] "Generic (PLEG): container finished" podID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerID="468b053d64c80baec6de3b54c4b2f477a89ae15f7b2f83e72b93e7a2a09b7e47" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.514141 4808 generic.go:334] "Generic (PLEG): container finished" podID="ff670244-5344-4409-9823-6bfcf9ed274d" containerID="f07d48d83b8d167312f75dfe2e3617926d4c7c6a17b68b60f025f9a0615ec6aa" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.514204 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ktddg" event={"ID":"ff670244-5344-4409-9823-6bfcf9ed274d","Type":"ContainerDied","Data":"f07d48d83b8d167312f75dfe2e3617926d4c7c6a17b68b60f025f9a0615ec6aa"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.516304 4808 generic.go:334] "Generic (PLEG): container finished" podID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerID="20f7389fa9f51fba5453c2a234db420d7d9f90654863c47b866a9ae0d75fd9b5" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.516354 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-r5lfk" event={"ID":"72e328d4-94e9-42bc-ae1c-b07b01d80072","Type":"ContainerDied","Data":"20f7389fa9f51fba5453c2a234db420d7d9f90654863c47b866a9ae0d75fd9b5"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.517506 4808 generic.go:334] "Generic (PLEG): container finished" podID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerID="ebb5009c36b8fd7590317bf3c492f0defedfa61fc35e3d839e79e88a3e507747" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.517543 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-78cc-account-create-update-k7vgl" event={"ID":"e183e901-16a0-43cf-9ce5-ef36da8686d1","Type":"ContainerDied","Data":"ebb5009c36b8fd7590317bf3c492f0defedfa61fc35e3d839e79e88a3e507747"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.518923 4808 generic.go:334] "Generic (PLEG): container finished" podID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerID="56b80ac7ee378fc8d9b7164abf8b6f6b4c7155149d6206a5a9c6aa08286e5594" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.518963 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8c80-account-create-update-rk4jj" event={"ID":"e5180ea6-12c0-4463-8fe5-c35ab2a15b44","Type":"ContainerDied","Data":"56b80ac7ee378fc8d9b7164abf8b6f6b4c7155149d6206a5a9c6aa08286e5594"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.520344 4808 generic.go:334] "Generic (PLEG): container finished" podID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerID="c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.520380 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerDied","Data":"c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0"} Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.522279 4808 generic.go:334] "Generic (PLEG): container finished" podID="2495c4d6-8174-4b4d-9114-968620fbba31" containerID="2e2ee0ccc758be665530168176318d177d82ba65213912cccc942306aee57326" exitCode=0 Feb 17 16:14:06 crc kubenswrapper[4808]: I0217 16:14:06.522424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" event={"ID":"2495c4d6-8174-4b4d-9114-968620fbba31","Type":"ContainerDied","Data":"2e2ee0ccc758be665530168176318d177d82ba65213912cccc942306aee57326"} Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.106376 
4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f2jqv" Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.220643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") pod \"7377369f-b540-4b85-be05-4200c9695a41\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.220708 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") pod \"7377369f-b540-4b85-be05-4200c9695a41\" (UID: \"7377369f-b540-4b85-be05-4200c9695a41\") " Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.221813 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7377369f-b540-4b85-be05-4200c9695a41" (UID: "7377369f-b540-4b85-be05-4200c9695a41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.277181 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm" (OuterVolumeSpecName: "kube-api-access-9t2pm") pod "7377369f-b540-4b85-be05-4200c9695a41" (UID: "7377369f-b540-4b85-be05-4200c9695a41"). InnerVolumeSpecName "kube-api-access-9t2pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.323080 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t2pm\" (UniqueName: \"kubernetes.io/projected/7377369f-b540-4b85-be05-4200c9695a41-kube-api-access-9t2pm\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.323110 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7377369f-b540-4b85-be05-4200c9695a41-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.534189 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"f2c18fb16875bf72623cb846c0041b7d6ff5f8cf313c79c3b111b6ad2358eedd"} Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.535911 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f2jqv" event={"ID":"7377369f-b540-4b85-be05-4200c9695a41","Type":"ContainerDied","Data":"8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3"} Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.535944 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d54a778f8d7c90911da4a862fcb3782ebd10a599385db7a3a37e16207cd66d3" Feb 17 16:14:07 crc kubenswrapper[4808]: I0217 16:14:07.536096 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-f2jqv" Feb 17 16:14:08 crc kubenswrapper[4808]: I0217 16:14:08.218138 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 17 16:14:08 crc kubenswrapper[4808]: I0217 16:14:08.548670 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"a537df6f55dce8af21497e898f451fd7563f1f90fb34c6f630089eb48e909606"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.568332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jmq6n" event={"ID":"3ccecd7d-0e59-4336-a6ec-a595adbb727e","Type":"ContainerDied","Data":"6c4dad549168fd0fe9877db14a616f977db4f3678b2cef50d4cc95501cb7ec97"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.568775 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4dad549168fd0fe9877db14a616f977db4f3678b2cef50d4cc95501cb7ec97" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.575386 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59d8-account-create-update-5vsvx" event={"ID":"02478fdd-380d-42f9-b105-c3ae86d224a8","Type":"ContainerDied","Data":"3b5e73a2bf501307ef0912c3e2417e209a9bf79f1629e4736731809703ca6124"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.575430 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5e73a2bf501307ef0912c3e2417e209a9bf79f1629e4736731809703ca6124" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.577645 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ktddg" event={"ID":"ff670244-5344-4409-9823-6bfcf9ed274d","Type":"ContainerDied","Data":"db36c3bbf39537df83a4da37662c8e67b4aa150cf22c4630a5ddf0b8ff0b32b4"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 
16:14:10.577690 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db36c3bbf39537df83a4da37662c8e67b4aa150cf22c4630a5ddf0b8ff0b32b4" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.582950 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" event={"ID":"2495c4d6-8174-4b4d-9114-968620fbba31","Type":"ContainerDied","Data":"e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.583025 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e222dc202c5439197b586024c1b5930706f3a75b7b984a24eceff61c9fc9bd51" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.589208 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-78cc-account-create-update-k7vgl" event={"ID":"e183e901-16a0-43cf-9ce5-ef36da8686d1","Type":"ContainerDied","Data":"e734ff22797424d60d75d0ff894eb99b0a93ed10a3801fb6e5b9a52dcc8e1b52"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.589383 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e734ff22797424d60d75d0ff894eb99b0a93ed10a3801fb6e5b9a52dcc8e1b52" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.593548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8c80-account-create-update-rk4jj" event={"ID":"e5180ea6-12c0-4463-8fe5-c35ab2a15b44","Type":"ContainerDied","Data":"1c00c6c47bb9156cd63db3c65a93373cbafb8faed7fa643611f22da349c11bb0"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.593610 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c00c6c47bb9156cd63db3c65a93373cbafb8faed7fa643611f22da349c11bb0" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.595696 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jqrq2" 
event={"ID":"c02cbd83-d077-4812-b852-7fe9a0182b71","Type":"ContainerDied","Data":"f21b1b34203e339a6df9f3de1f3c14db9849e5fd507a49d6a22a7fc36cc73dbc"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.595724 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f21b1b34203e339a6df9f3de1f3c14db9849e5fd507a49d6a22a7fc36cc73dbc" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.597423 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-r5lfk" event={"ID":"72e328d4-94e9-42bc-ae1c-b07b01d80072","Type":"ContainerDied","Data":"021f8a63c457f5f6931040c6e0c6166d1f2402d15c0182fc36f0fd1a25056869"} Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.597502 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021f8a63c457f5f6931040c6e0c6166d1f2402d15c0182fc36f0fd1a25056869" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.672852 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.681715 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.691796 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.726673 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.740195 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.771156 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.773765 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.787452 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845409 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") pod \"ff670244-5344-4409-9823-6bfcf9ed274d\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845453 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") pod \"c02cbd83-d077-4812-b852-7fe9a0182b71\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845481 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") pod \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845517 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") pod \"ff670244-5344-4409-9823-6bfcf9ed274d\" (UID: \"ff670244-5344-4409-9823-6bfcf9ed274d\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845537 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") pod \"e183e901-16a0-43cf-9ce5-ef36da8686d1\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845557 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") pod \"02478fdd-380d-42f9-b105-c3ae86d224a8\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845610 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") pod \"72e328d4-94e9-42bc-ae1c-b07b01d80072\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845643 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") pod \"02478fdd-380d-42f9-b105-c3ae86d224a8\" (UID: \"02478fdd-380d-42f9-b105-c3ae86d224a8\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") pod \"e183e901-16a0-43cf-9ce5-ef36da8686d1\" (UID: \"e183e901-16a0-43cf-9ce5-ef36da8686d1\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx7rg\" (UniqueName: 
\"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") pod \"72e328d4-94e9-42bc-ae1c-b07b01d80072\" (UID: \"72e328d4-94e9-42bc-ae1c-b07b01d80072\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845731 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") pod \"c02cbd83-d077-4812-b852-7fe9a0182b71\" (UID: \"c02cbd83-d077-4812-b852-7fe9a0182b71\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.845761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") pod \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\" (UID: \"e5180ea6-12c0-4463-8fe5-c35ab2a15b44\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.846878 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72e328d4-94e9-42bc-ae1c-b07b01d80072" (UID: "72e328d4-94e9-42bc-ae1c-b07b01d80072"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.846911 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5180ea6-12c0-4463-8fe5-c35ab2a15b44" (UID: "e5180ea6-12c0-4463-8fe5-c35ab2a15b44"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.847354 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff670244-5344-4409-9823-6bfcf9ed274d" (UID: "ff670244-5344-4409-9823-6bfcf9ed274d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.847453 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c02cbd83-d077-4812-b852-7fe9a0182b71" (UID: "c02cbd83-d077-4812-b852-7fe9a0182b71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.848611 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e183e901-16a0-43cf-9ce5-ef36da8686d1" (UID: "e183e901-16a0-43cf-9ce5-ef36da8686d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.849643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "02478fdd-380d-42f9-b105-c3ae86d224a8" (UID: "02478fdd-380d-42f9-b105-c3ae86d224a8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.856181 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn" (OuterVolumeSpecName: "kube-api-access-j8xgn") pod "e5180ea6-12c0-4463-8fe5-c35ab2a15b44" (UID: "e5180ea6-12c0-4463-8fe5-c35ab2a15b44"). InnerVolumeSpecName "kube-api-access-j8xgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.857073 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg" (OuterVolumeSpecName: "kube-api-access-sx7rg") pod "72e328d4-94e9-42bc-ae1c-b07b01d80072" (UID: "72e328d4-94e9-42bc-ae1c-b07b01d80072"). InnerVolumeSpecName "kube-api-access-sx7rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.859340 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf" (OuterVolumeSpecName: "kube-api-access-6r6zf") pod "02478fdd-380d-42f9-b105-c3ae86d224a8" (UID: "02478fdd-380d-42f9-b105-c3ae86d224a8"). InnerVolumeSpecName "kube-api-access-6r6zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.859692 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8" (OuterVolumeSpecName: "kube-api-access-xj2f8") pod "c02cbd83-d077-4812-b852-7fe9a0182b71" (UID: "c02cbd83-d077-4812-b852-7fe9a0182b71"). InnerVolumeSpecName "kube-api-access-xj2f8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.860295 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74" (OuterVolumeSpecName: "kube-api-access-7rj74") pod "e183e901-16a0-43cf-9ce5-ef36da8686d1" (UID: "e183e901-16a0-43cf-9ce5-ef36da8686d1"). InnerVolumeSpecName "kube-api-access-7rj74". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.860675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh" (OuterVolumeSpecName: "kube-api-access-dspfh") pod "ff670244-5344-4409-9823-6bfcf9ed274d" (UID: "ff670244-5344-4409-9823-6bfcf9ed274d"). InnerVolumeSpecName "kube-api-access-dspfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947613 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") pod \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947696 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") pod \"2495c4d6-8174-4b4d-9114-968620fbba31\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947742 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") pod 
\"3ccecd7d-0e59-4336-a6ec-a595adbb727e\" (UID: \"3ccecd7d-0e59-4336-a6ec-a595adbb727e\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.947885 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") pod \"2495c4d6-8174-4b4d-9114-968620fbba31\" (UID: \"2495c4d6-8174-4b4d-9114-968620fbba31\") " Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948325 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3ccecd7d-0e59-4336-a6ec-a595adbb727e" (UID: "3ccecd7d-0e59-4336-a6ec-a595adbb727e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948447 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2495c4d6-8174-4b4d-9114-968620fbba31" (UID: "2495c4d6-8174-4b4d-9114-968620fbba31"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948930 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff670244-5344-4409-9823-6bfcf9ed274d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948958 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c02cbd83-d077-4812-b852-7fe9a0182b71-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948971 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948984 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3ccecd7d-0e59-4336-a6ec-a595adbb727e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.948998 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dspfh\" (UniqueName: \"kubernetes.io/projected/ff670244-5344-4409-9823-6bfcf9ed274d-kube-api-access-dspfh\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949012 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rj74\" (UniqueName: \"kubernetes.io/projected/e183e901-16a0-43cf-9ce5-ef36da8686d1-kube-api-access-7rj74\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949024 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/02478fdd-380d-42f9-b105-c3ae86d224a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc 
kubenswrapper[4808]: I0217 16:14:10.949036 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72e328d4-94e9-42bc-ae1c-b07b01d80072-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949048 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r6zf\" (UniqueName: \"kubernetes.io/projected/02478fdd-380d-42f9-b105-c3ae86d224a8-kube-api-access-6r6zf\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949062 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e183e901-16a0-43cf-9ce5-ef36da8686d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949075 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx7rg\" (UniqueName: \"kubernetes.io/projected/72e328d4-94e9-42bc-ae1c-b07b01d80072-kube-api-access-sx7rg\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949087 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj2f8\" (UniqueName: \"kubernetes.io/projected/c02cbd83-d077-4812-b852-7fe9a0182b71-kube-api-access-xj2f8\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949099 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2495c4d6-8174-4b4d-9114-968620fbba31-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.949110 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8xgn\" (UniqueName: \"kubernetes.io/projected/e5180ea6-12c0-4463-8fe5-c35ab2a15b44-kube-api-access-j8xgn\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.951762 4808 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n" (OuterVolumeSpecName: "kube-api-access-mx65n") pod "3ccecd7d-0e59-4336-a6ec-a595adbb727e" (UID: "3ccecd7d-0e59-4336-a6ec-a595adbb727e"). InnerVolumeSpecName "kube-api-access-mx65n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:10 crc kubenswrapper[4808]: I0217 16:14:10.952682 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4" (OuterVolumeSpecName: "kube-api-access-5dqw4") pod "2495c4d6-8174-4b4d-9114-968620fbba31" (UID: "2495c4d6-8174-4b4d-9114-968620fbba31"). InnerVolumeSpecName "kube-api-access-5dqw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.050467 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx65n\" (UniqueName: \"kubernetes.io/projected/3ccecd7d-0e59-4336-a6ec-a595adbb727e-kube-api-access-mx65n\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.050494 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dqw4\" (UniqueName: \"kubernetes.io/projected/2495c4d6-8174-4b4d-9114-968620fbba31-kube-api-access-5dqw4\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.610767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerStarted","Data":"1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c"} Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614482 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-jqrq2" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614613 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-78cc-account-create-update-k7vgl" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"491643042c0152f38129738d60fe00177c88399b512e5240d03ab9d4b0d4ece7"} Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614675 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-a9c6-account-create-update-48vv8" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"59cbb05f824a3ef841fb687bc9090d82b0e7e6d58f0798feee4f22da8aef9866"} Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"27b6ab5d4d28f4a5b479a7551ba71f2fa6d495478c1afa92ecb29a9b87576d4a"} Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.614971 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jmq6n" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615040 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8c80-account-create-update-rk4jj" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615066 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-r5lfk" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615095 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59d8-account-create-update-5vsvx" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.615125 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ktddg" Feb 17 16:14:11 crc kubenswrapper[4808]: I0217 16:14:11.642109 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kzjns" podStartSLOduration=4.504135956 podStartE2EDuration="9.642085208s" podCreationTimestamp="2026-02-17 16:14:02 +0000 UTC" firstStartedPulling="2026-02-17 16:14:05.401891083 +0000 UTC m=+1208.918250156" lastFinishedPulling="2026-02-17 16:14:10.539840325 +0000 UTC m=+1214.056199408" observedRunningTime="2026-02-17 16:14:11.629323223 +0000 UTC m=+1215.145682326" watchObservedRunningTime="2026-02-17 16:14:11.642085208 +0000 UTC m=+1215.158444301" Feb 17 16:14:12 crc kubenswrapper[4808]: I0217 16:14:12.626766 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"b2a7c5ffc9b4e3884f38f87b5b2eda9b703b71f1e4c9a4c9c858de2db7371020"} Feb 17 16:14:13 crc kubenswrapper[4808]: I0217 16:14:13.654684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"df9f68a5854b4f1558a0524fced4d38e13660337302c92bef5248d815dfd21c4"} Feb 17 16:14:13 crc kubenswrapper[4808]: I0217 16:14:13.655059 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"e48beb2c358671a6c7db7f0ee8e9fb94bf4431513f6021181e53bc794008621a"} Feb 17 16:14:13 
crc kubenswrapper[4808]: I0217 16:14:13.655085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"6c3bb6aea8cbb30b8a9eae461c406068fbf442b0f36daa227bb7270c104f357f"} Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.675612 4808 generic.go:334] "Generic (PLEG): container finished" podID="dadd7e91-13f0-4ba2-9f87-ad057567a56d" containerID="a537df6f55dce8af21497e898f451fd7563f1f90fb34c6f630089eb48e909606" exitCode=0 Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.675737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerDied","Data":"a537df6f55dce8af21497e898f451fd7563f1f90fb34c6f630089eb48e909606"} Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.683243 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"7b4da3c810403b3cdb3db26c8c3246fd68acd6115ee8aeff40464c7a3ebc9c97"} Feb 17 16:14:15 crc kubenswrapper[4808]: I0217 16:14:15.683282 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"e2dcb95417c1f379bf96d646de9cbf2961f747d1dd658fac1841cf7282542ac5"} Feb 17 16:14:15 crc kubenswrapper[4808]: E0217 16:14:15.952786 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.697634 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"e60078b07b0caf9d38ff7dd0a579724180a348b6373ed99a735f3a21becd9e5f"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.699792 4808 generic.go:334] "Generic (PLEG): container finished" podID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerID="1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c" exitCode=0
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.699844 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerDied","Data":"1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.711973 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"6595237b391e67ed09cb1881b7b4f03893623f863075fed0e65248cf65ce7c4b"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712025 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"ecdc1158d969e6da45456366a446f550a2b7d52f06dc7596569b8baa90a8a564"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712037 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"a034788c022750136ff34bf82590c806a6e424137889c25f3dbef22d52899426"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712047 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"f1073016d5f2f6b5d054bce37c43a6a88228df020ecfd931154f637eafef3d55"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.712057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8f52ebe4-f003-4d0b-8539-1d406db95b2f","Type":"ContainerStarted","Data":"d1a018be7a22a09cf47a08d07b315fda3ffd60d6f30745e5dd18c23d950530e1"}
Feb 17 16:14:16 crc kubenswrapper[4808]: I0217 16:14:16.753465 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.133134597 podStartE2EDuration="49.753444821s" podCreationTimestamp="2026-02-17 16:13:27 +0000 UTC" firstStartedPulling="2026-02-17 16:14:05.519878527 +0000 UTC m=+1209.036237600" lastFinishedPulling="2026-02-17 16:14:15.140188711 +0000 UTC m=+1218.656547824" observedRunningTime="2026-02-17 16:14:16.749972256 +0000 UTC m=+1220.266331339" watchObservedRunningTime="2026-02-17 16:14:16.753444821 +0000 UTC m=+1220.269803904"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.025820 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"]
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026297 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026323 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026341 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026350 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026359 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026367 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026381 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7377369f-b540-4b85-be05-4200c9695a41" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026388 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7377369f-b540-4b85-be05-4200c9695a41" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026398 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026404 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026421 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026428 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026450 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026475 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026484 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: E0217 16:14:17.026494 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.026502 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027511 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027544 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027559 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027590 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" containerName="mariadb-database-create"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027601 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027614 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027626 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027638 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.027653 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7377369f-b540-4b85-be05-4200c9695a41" containerName="mariadb-account-create-update"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.028969 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.034841 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.035037 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"]
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173240 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173348 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173471 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173631 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.173798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.276605 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277316 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.277956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.278012 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.278362 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.279124 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.279566 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.280004 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.301334 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"dnsmasq-dns-764c5664d7-5dcwb\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.348215 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb"
Feb 17 16:14:17 crc kubenswrapper[4808]: I0217 16:14:17.822168 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"]
Feb 17 16:14:17 crc kubenswrapper[4808]: W0217 16:14:17.844718 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75b951c6_37fc_4757_bafd_ef3647e3b701.slice/crio-1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e WatchSource:0}: Error finding container 1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e: Status 404 returned error can't find the container with id 1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.064829 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.193301 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") pod \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") "
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.193631 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") pod \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") "
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.194423 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") pod \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\" (UID: \"41c68bd6-6280-4a89-be87-4d65f06a5a4d\") "
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.198509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq" (OuterVolumeSpecName: "kube-api-access-f6rjq") pod "41c68bd6-6280-4a89-be87-4d65f06a5a4d" (UID: "41c68bd6-6280-4a89-be87-4d65f06a5a4d"). InnerVolumeSpecName "kube-api-access-f6rjq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.230107 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c68bd6-6280-4a89-be87-4d65f06a5a4d" (UID: "41c68bd6-6280-4a89-be87-4d65f06a5a4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.252191 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data" (OuterVolumeSpecName: "config-data") pod "41c68bd6-6280-4a89-be87-4d65f06a5a4d" (UID: "41c68bd6-6280-4a89-be87-4d65f06a5a4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.297133 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.297165 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c68bd6-6280-4a89-be87-4d65f06a5a4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.297181 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6rjq\" (UniqueName: \"kubernetes.io/projected/41c68bd6-6280-4a89-be87-4d65f06a5a4d-kube-api-access-f6rjq\") on node \"crc\" DevicePath \"\""
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.747191 4808 generic.go:334] "Generic (PLEG): container finished" podID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" exitCode=0
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.747706 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerDied","Data":"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109"}
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.747787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerStarted","Data":"1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e"}
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.752849 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerStarted","Data":"be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232"}
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.759165 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kzjns" event={"ID":"41c68bd6-6280-4a89-be87-4d65f06a5a4d","Type":"ContainerDied","Data":"775b438b7af2b3cc184f6f5f5f4c39d337ef64447d3370a28378044cb5ec6a4d"}
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.759222 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="775b438b7af2b3cc184f6f5f5f4c39d337ef64447d3370a28378044cb5ec6a4d"
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.759301 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kzjns"
Feb 17 16:14:18 crc kubenswrapper[4808]: I0217 16:14:18.806407 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-4mdzt" podStartSLOduration=2.945918163 podStartE2EDuration="33.806386984s" podCreationTimestamp="2026-02-17 16:13:45 +0000 UTC" firstStartedPulling="2026-02-17 16:13:46.852862329 +0000 UTC m=+1190.369221402" lastFinishedPulling="2026-02-17 16:14:17.71333115 +0000 UTC m=+1221.229690223" observedRunningTime="2026-02-17 16:14:18.805197663 +0000 UTC m=+1222.321556736" watchObservedRunningTime="2026-02-17 16:14:18.806386984 +0000 UTC m=+1222.322746067"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.018502 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.038878 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p2fwj"]
Feb 17 16:14:19 crc kubenswrapper[4808]: E0217 16:14:19.045907 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerName="keystone-db-sync"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.046028 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerName="keystone-db-sync"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.046252 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" containerName="keystone-db-sync"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.050228 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.058922 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.059808 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.059828 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.059940 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.060458 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.081813 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.112601 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114030 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114124 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114219 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114242 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114282 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114360 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.114950 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.166884 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216257 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216314 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216337 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216385 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216400 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216450 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216488 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216535 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.216597 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.227337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.235903 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.241676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.243021 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.247456 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.289247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"keystone-bootstrap-p2fwj\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " pod="openstack/keystone-bootstrap-p2fwj"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.302655 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-jcqjf"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.303878 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.309767 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.310007 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqdgs"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.310162 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.318627 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jskwv"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.319973 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jskwv"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320852 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.320894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.321989 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.322627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.322902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.323252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.323940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.332009 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.332222 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-89rvs"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.332343 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.344655 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jcqjf"]
Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.364222 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jskwv"]
Feb 17 16:14:19 crc
kubenswrapper[4808]: I0217 16:14:19.379230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"dnsmasq-dns-5959f8865f-kpwh4\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.390778 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432525 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432552 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432611 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod 
\"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432684 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432706 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432725 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.432805 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod 
\"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.446252 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.471781 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.474071 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.480388 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.480726 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.492715 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.511891 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.534640 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.535838 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.538500 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-26x5l" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539473 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539512 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539555 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539598 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539619 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"cinder-db-sync-jcqjf\" (UID: 
\"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539722 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539742 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.539866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc 
kubenswrapper[4808]: I0217 16:14:19.546181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.549130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.556472 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.558828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.559089 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.578180 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc 
kubenswrapper[4808]: I0217 16:14:19.596206 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"neutron-db-sync-jskwv\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.596277 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.604186 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.605495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"cinder-db-sync-jcqjf\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.624300 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.682887 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.705069 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799056 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799440 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799620 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 
16:14:19.799824 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799938 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.799967 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.801264 4808 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.843671 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.845360 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.850485 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p4pcv" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.850736 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.850844 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.865014 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.879731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"242e1b17b83477623f3db53de91633b1733bef1f427e3e630e934f7135ecb6d2"} Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.879823 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"dadd7e91-13f0-4ba2-9f87-ad057567a56d","Type":"ContainerStarted","Data":"c94cacbebe726d53c1cfb7a9941c3178ffe9137486d282c308ad8f46f0586896"} Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.904971 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" 
event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerStarted","Data":"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad"} Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.905137 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" containerID="cri-o://5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" gracePeriod=10 Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.906006 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908335 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908395 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908437 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908498 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvc8\" (UniqueName: 
\"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908543 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908590 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.908629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912127 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912156 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912235 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " 
pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912354 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.912923 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.913464 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.914114 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.915115 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.918883 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.919049 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.925041 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.925662 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kqv9d" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.928912 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.929312 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.936867 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.943335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.944893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.950264 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.950438 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ceilometer-0\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " pod="openstack/ceilometer-0" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.951398 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"barbican-db-sync-rwld8\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.951450 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 16:14:19.964742 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:14:19 crc kubenswrapper[4808]: I0217 
16:14:19.988624 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" podStartSLOduration=2.988603103 podStartE2EDuration="2.988603103s" podCreationTimestamp="2026-02-17 16:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:19.929633997 +0000 UTC m=+1223.445993080" watchObservedRunningTime="2026-02-17 16:14:19.988603103 +0000 UTC m=+1223.504962176" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024275 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024484 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024508 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"cloudkitty-db-sync-wdrmd\" 
(UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024647 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024686 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024720 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc 
kubenswrapper[4808]: I0217 16:14:20.024853 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024876 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024913 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.024982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 
16:14:20.025055 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.025096 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.029205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.029895 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.030340 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.030419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.031001 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.031720 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.035533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.035703 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.043229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"placement-db-sync-d52vg\" (UID: 
\"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.043394 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"placement-db-sync-d52vg\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.045482 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"dnsmasq-dns-58dd9ff6bc-bbhtn\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.127764 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.127942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.127968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 
16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.128005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.128042 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.132122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.132896 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.136164 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.137478 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.153498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"cloudkitty-db-sync-wdrmd\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.174037 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.190985 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.221318 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.237514 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.239703 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.257010 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.344941 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.446915 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.569198 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:14:20 crc kubenswrapper[4808]: W0217 16:14:20.580052 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0cc3be3_7aa7_4384_97ed_1ec7bf75f026.slice/crio-722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc WatchSource:0}: Error finding container 722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc: Status 404 returned error can't find the container with id 722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.699957 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.753263 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.753675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.753865 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.754025 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.754139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.754247 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") pod \"75b951c6-37fc-4757-bafd-ef3647e3b701\" (UID: \"75b951c6-37fc-4757-bafd-ef3647e3b701\") " Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.755913 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.775632 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp" (OuterVolumeSpecName: "kube-api-access-rcsqp") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "kube-api-access-rcsqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:20 crc kubenswrapper[4808]: W0217 16:14:20.783758 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce9fba55_1b70_4d39_a052_bff96bd8e93a.slice/crio-722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00 WatchSource:0}: Error finding container 722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00: Status 404 returned error can't find the container with id 722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00 Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.857084 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcsqp\" (UniqueName: \"kubernetes.io/projected/75b951c6-37fc-4757-bafd-ef3647e3b701-kube-api-access-rcsqp\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.910473 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.917230 4808 generic.go:334] "Generic (PLEG): container finished" podID="4cdfa661-fa28-48be-b416-f2e69927fc9b" 
containerID="29684d96c1943280e84c76de58aee0550b74d290c562f5eaa5511b6310aa658b" exitCode=0 Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.917298 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" event={"ID":"4cdfa661-fa28-48be-b416-f2e69927fc9b","Type":"ContainerDied","Data":"29684d96c1943280e84c76de58aee0550b74d290c562f5eaa5511b6310aa658b"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.917339 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" event={"ID":"4cdfa661-fa28-48be-b416-f2e69927fc9b","Type":"ContainerStarted","Data":"d136669bdec3d3e1777e3899ee2de7762492e7209be6f5909cc9b03217da4323"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.923459 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.929080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932603 4808 generic.go:334] "Generic (PLEG): container finished" podID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" exitCode=0 Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932667 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerDied","Data":"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" 
event={"ID":"75b951c6-37fc-4757-bafd-ef3647e3b701","Type":"ContainerDied","Data":"1b646decde62c27e860d00c8b40a1f84672ace9f752cc2f00a47cf4ad3e6b50e"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932709 4808 scope.go:117] "RemoveContainer" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.932831 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-5dcwb" Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.937424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerStarted","Data":"17116d89192a8613360b83b9abc0d23bc6d3cc17099f32067b22d7c7d3c6494e"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.940334 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.948016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerStarted","Data":"5717dd2ef8af55d59bb6a6c87c756928ce372bb105a7380fa60e88c0fb60d552"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.955951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerStarted","Data":"722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc"} Feb 17 16:14:20 crc kubenswrapper[4808]: I0217 16:14:20.991027 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.991004294 podStartE2EDuration="16.991004294s" podCreationTimestamp="2026-02-17 16:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-17 16:14:20.986073361 +0000 UTC m=+1224.502432434" watchObservedRunningTime="2026-02-17 16:14:20.991004294 +0000 UTC m=+1224.507363377" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.010625 4808 scope.go:117] "RemoveContainer" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.017508 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.063290 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.067141 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.068426 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.071635 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config" (OuterVolumeSpecName: "config") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.074865 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "75b951c6-37fc-4757-bafd-ef3647e3b701" (UID: "75b951c6-37fc-4757-bafd-ef3647e3b701"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.127387 4808 scope.go:117] "RemoveContainer" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" Feb 17 16:14:21 crc kubenswrapper[4808]: E0217 16:14:21.130655 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad\": container with ID starting with 5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad not found: ID does not exist" containerID="5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.130687 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad"} err="failed to get container status \"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad\": rpc error: code = NotFound desc = could not find container 
\"5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad\": container with ID starting with 5aa14312c0a8d458b64e8098392b9450553a2c278c532aea42aac37dc71148ad not found: ID does not exist" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.130708 4808 scope.go:117] "RemoveContainer" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" Feb 17 16:14:21 crc kubenswrapper[4808]: E0217 16:14:21.141829 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109\": container with ID starting with 6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109 not found: ID does not exist" containerID="6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.141863 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109"} err="failed to get container status \"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109\": rpc error: code = NotFound desc = could not find container \"6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109\": container with ID starting with 6c36b7f72b37c3fb336e2a5f15220b8f1aec757f894754e35bf7cd4461ad3109 not found: ID does not exist" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165466 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165496 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: 
I0217 16:14:21.165507 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.165517 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/75b951c6-37fc-4757-bafd-ef3647e3b701-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.187431 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"] Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.293475 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.307730 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-5dcwb"] Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.575315 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707162 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707264 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707321 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707461 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.707554 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") pod \"4cdfa661-fa28-48be-b416-f2e69927fc9b\" (UID: \"4cdfa661-fa28-48be-b416-f2e69927fc9b\") " Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.714737 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl" (OuterVolumeSpecName: "kube-api-access-b4mdl") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "kube-api-access-b4mdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.741124 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.741770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config" (OuterVolumeSpecName: "config") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.750164 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.753056 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.765028 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cdfa661-fa28-48be-b416-f2e69927fc9b" (UID: "4cdfa661-fa28-48be-b416-f2e69927fc9b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814003 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814038 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814048 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814057 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4mdl\" (UniqueName: \"kubernetes.io/projected/4cdfa661-fa28-48be-b416-f2e69927fc9b-kube-api-access-b4mdl\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814066 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.814076 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdfa661-fa28-48be-b416-f2e69927fc9b-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.975745 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerStarted","Data":"f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0"} Feb 17 16:14:21 crc kubenswrapper[4808]: I0217 16:14:21.984530 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerStarted","Data":"9ba656f842dfb00605cd2712c9679dadbf966fdee137e5405e4ec802b02357c9"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.014870 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerStarted","Data":"5b531905add091d4dfe9c3b871669f1b4764b98e78ffc02ea10bcfde5b754358"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.016752 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" event={"ID":"4cdfa661-fa28-48be-b416-f2e69927fc9b","Type":"ContainerDied","Data":"d136669bdec3d3e1777e3899ee2de7762492e7209be6f5909cc9b03217da4323"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.016784 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-kpwh4" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.016785 4808 scope.go:117] "RemoveContainer" containerID="29684d96c1943280e84c76de58aee0550b74d290c562f5eaa5511b6310aa658b" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.020745 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac763412-39e7-40d0-892a-57ac801af2bb" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" exitCode=0 Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.020796 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerDied","Data":"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.020814 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerStarted","Data":"027ce35e95410cc92a867a6b938a45485c623b5bfa8d8827b979b970dbe86f22"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.021953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerStarted","Data":"e334d06468b3a37f46d5f6db68268b3881996656b8f3df2be0b3c006d2589a72"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.026208 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerStarted","Data":"256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d"} Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.058675 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jskwv" podStartSLOduration=3.05860718 podStartE2EDuration="3.05860718s" 
podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:21.997822715 +0000 UTC m=+1225.514181798" watchObservedRunningTime="2026-02-17 16:14:22.05860718 +0000 UTC m=+1225.574966253" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.140478 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p2fwj" podStartSLOduration=3.140456437 podStartE2EDuration="3.140456437s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:22.085417456 +0000 UTC m=+1225.601776529" watchObservedRunningTime="2026-02-17 16:14:22.140456437 +0000 UTC m=+1225.656815510" Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.202690 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.224799 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-kpwh4"] Feb 17 16:14:22 crc kubenswrapper[4808]: I0217 16:14:22.238058 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.050677 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerStarted","Data":"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9"} Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.051150 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.166833 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" path="/var/lib/kubelet/pods/4cdfa661-fa28-48be-b416-f2e69927fc9b/volumes" Feb 17 16:14:23 crc kubenswrapper[4808]: I0217 16:14:23.167452 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" path="/var/lib/kubelet/pods/75b951c6-37fc-4757-bafd-ef3647e3b701/volumes" Feb 17 16:14:25 crc kubenswrapper[4808]: I0217 16:14:25.094038 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:26 crc kubenswrapper[4808]: I0217 16:14:26.118417 4808 generic.go:334] "Generic (PLEG): container finished" podID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerID="256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d" exitCode=0 Feb 17 16:14:26 crc kubenswrapper[4808]: I0217 16:14:26.118493 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerDied","Data":"256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d"} Feb 17 16:14:26 crc kubenswrapper[4808]: I0217 16:14:26.151041 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" podStartSLOduration=7.150745027 podStartE2EDuration="7.150745027s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:23.074000353 +0000 UTC m=+1226.590359426" watchObservedRunningTime="2026-02-17 16:14:26.150745027 +0000 UTC m=+1229.667104100" Feb 17 16:14:26 crc kubenswrapper[4808]: E0217 16:14:26.235004 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:30 crc kubenswrapper[4808]: I0217 16:14:30.223537 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:14:30 crc kubenswrapper[4808]: I0217 16:14:30.284030 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:14:30 crc kubenswrapper[4808]: I0217 16:14:30.285015 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" containerID="cri-o://5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a" gracePeriod=10 Feb 17 16:14:31 crc kubenswrapper[4808]: I0217 16:14:31.172759 4808 generic.go:334] "Generic (PLEG): container finished" podID="317e56c8-5f01-4313-a632-12ccaccf9442" containerID="5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a" exitCode=0 Feb 17 16:14:31 crc kubenswrapper[4808]: I0217 16:14:31.172798 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerDied","Data":"5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a"} Feb 17 16:14:32 crc kubenswrapper[4808]: I0217 16:14:32.184102 4808 generic.go:334] "Generic (PLEG): container finished" podID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" 
containerID="be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232" exitCode=0 Feb 17 16:14:32 crc kubenswrapper[4808]: I0217 16:14:32.184169 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerDied","Data":"be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232"} Feb 17 16:14:32 crc kubenswrapper[4808]: I0217 16:14:32.996917 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.386803 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512592 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512624 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 
16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512812 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512848 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.512874 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") pod \"4e39a33f-5d00-4171-bf63-6b12226901d3\" (UID: \"4e39a33f-5d00-4171-bf63-6b12226901d3\") " Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.518405 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.518453 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts" (OuterVolumeSpecName: "scripts") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.518464 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb" (OuterVolumeSpecName: "kube-api-access-nklnb") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "kube-api-access-nklnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.520289 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.538784 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data" (OuterVolumeSpecName: "config-data") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.541888 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e39a33f-5d00-4171-bf63-6b12226901d3" (UID: "4e39a33f-5d00-4171-bf63-6b12226901d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614521 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614557 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nklnb\" (UniqueName: \"kubernetes.io/projected/4e39a33f-5d00-4171-bf63-6b12226901d3-kube-api-access-nklnb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614587 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614599 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614611 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:33 crc kubenswrapper[4808]: I0217 16:14:33.614620 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e39a33f-5d00-4171-bf63-6b12226901d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.209806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p2fwj" event={"ID":"4e39a33f-5d00-4171-bf63-6b12226901d3","Type":"ContainerDied","Data":"17116d89192a8613360b83b9abc0d23bc6d3cc17099f32067b22d7c7d3c6494e"} Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 
16:14:34.209850 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p2fwj" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.210220 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17116d89192a8613360b83b9abc0d23bc6d3cc17099f32067b22d7c7d3c6494e" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.470228 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.481682 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p2fwj"] Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.494822 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495183 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495199 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495212 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495217 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495233 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerName="keystone-bootstrap" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495240 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerName="keystone-bootstrap" Feb 17 
16:14:34 crc kubenswrapper[4808]: E0217 16:14:34.495251 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495257 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495422 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cdfa661-fa28-48be-b416-f2e69927fc9b" containerName="init" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495444 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b951c6-37fc-4757-bafd-ef3647e3b701" containerName="dnsmasq-dns" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.495455 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" containerName="keystone-bootstrap" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.496115 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501125 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501271 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501278 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.501520 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.507699 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639603 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639674 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639727 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod 
\"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639746 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639834 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.639938 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741562 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741648 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") 
" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741711 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.741767 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.749451 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.760269 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.760326 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.760654 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.762037 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.762959 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"keystone-bootstrap-67f4b\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:34 crc kubenswrapper[4808]: I0217 16:14:34.832591 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.094007 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.102782 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.165184 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e39a33f-5d00-4171-bf63-6b12226901d3" path="/var/lib/kubelet/pods/4e39a33f-5d00-4171-bf63-6b12226901d3/volumes" Feb 17 16:14:35 crc kubenswrapper[4808]: I0217 16:14:35.224163 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 17 16:14:36 crc kubenswrapper[4808]: E0217 16:14:36.489936 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:40 crc kubenswrapper[4808]: I0217 16:14:40.266481 4808 generic.go:334] "Generic (PLEG): container finished" podID="436b0400-6c82-450b-9505-61bf124b5db5" containerID="f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0" exitCode=0 Feb 17 16:14:40 crc kubenswrapper[4808]: I0217 16:14:40.266606 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" 
event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerDied","Data":"f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0"} Feb 17 16:14:42 crc kubenswrapper[4808]: I0217 16:14:42.996863 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.269467 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.312277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jskwv" event={"ID":"436b0400-6c82-450b-9505-61bf124b5db5","Type":"ContainerDied","Data":"5717dd2ef8af55d59bb6a6c87c756928ce372bb105a7380fa60e88c0fb60d552"} Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.312321 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jskwv" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.312334 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717dd2ef8af55d59bb6a6c87c756928ce372bb105a7380fa60e88c0fb60d552" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.320452 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.320514 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.320833 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.328779 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj" (OuterVolumeSpecName: "kube-api-access-8zvwj") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5"). InnerVolumeSpecName "kube-api-access-8zvwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.347014 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle podName:436b0400-6c82-450b-9505-61bf124b5db5 nodeName:}" failed. No retries permitted until 2026-02-17 16:14:44.846898456 +0000 UTC m=+1248.363257539 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5") : error deleting /var/lib/kubelet/pods/436b0400-6c82-450b-9505-61bf124b5db5/volume-subpaths: remove /var/lib/kubelet/pods/436b0400-6c82-450b-9505-61bf124b5db5/volume-subpaths: no such file or directory Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.349196 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config" (OuterVolumeSpecName: "config") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.422655 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvwj\" (UniqueName: \"kubernetes.io/projected/436b0400-6c82-450b-9505-61bf124b5db5-kube-api-access-8zvwj\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.422685 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.840320 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.843755 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.843947 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zvc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rwld8_openstack(5bf4d932-664a-46c6-bec5-f2b70950c824): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:44 crc kubenswrapper[4808]: E0217 16:14:44.845748 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rwld8" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931494 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") pod \"436b0400-6c82-450b-9505-61bf124b5db5\" (UID: \"436b0400-6c82-450b-9505-61bf124b5db5\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931687 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931729 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") pod 
\"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.931918 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") pod \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\" (UID: \"e4002815-8dd4-4668-bea7-0d54bdaa4dd6\") " Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.935560 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486" (OuterVolumeSpecName: "kube-api-access-rb486") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "kube-api-access-rb486". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.936141 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "436b0400-6c82-450b-9505-61bf124b5db5" (UID: "436b0400-6c82-450b-9505-61bf124b5db5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.936610 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.956268 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:44 crc kubenswrapper[4808]: I0217 16:14:44.978540 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data" (OuterVolumeSpecName: "config-data") pod "e4002815-8dd4-4668-bea7-0d54bdaa4dd6" (UID: "e4002815-8dd4-4668-bea7-0d54bdaa4dd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033788 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033834 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/436b0400-6c82-450b-9505-61bf124b5db5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033847 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033859 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.033871 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb486\" (UniqueName: \"kubernetes.io/projected/e4002815-8dd4-4668-bea7-0d54bdaa4dd6-kube-api-access-rb486\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.324459 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-4mdzt" event={"ID":"e4002815-8dd4-4668-bea7-0d54bdaa4dd6","Type":"ContainerDied","Data":"e5bfc747bb74b14a5184eb3f8c16443aca59a2667d60646ea7965a405418e0b0"} Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.324481 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-4mdzt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.324497 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5bfc747bb74b14a5184eb3f8c16443aca59a2667d60646ea7965a405418e0b0" Feb 17 16:14:45 crc kubenswrapper[4808]: E0217 16:14:45.326741 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rwld8" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.526307 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:45 crc kubenswrapper[4808]: E0217 16:14:45.527187 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436b0400-6c82-450b-9505-61bf124b5db5" containerName="neutron-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527215 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="436b0400-6c82-450b-9505-61bf124b5db5" containerName="neutron-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: 
E0217 16:14:45.527240 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerName="glance-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527249 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerName="glance-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527531 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="436b0400-6c82-450b-9505-61bf124b5db5" containerName="neutron-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.527595 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" containerName="glance-db-sync" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.529395 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.555838 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651594 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651620 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.651838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.652105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.652248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.683259 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.684988 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.690777 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-89rvs" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.691741 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.692042 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.694241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.699703 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754095 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754171 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " 
pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754206 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754226 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754260 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " 
pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754341 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.754413 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.755364 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.755792 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " 
pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.758878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.759048 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.759884 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.792151 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"dnsmasq-dns-7d88d7b95f-kcq78\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856285 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856347 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856556 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.856840 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.857157 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.860350 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.860442 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.862309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.862529 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.872614 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.877142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"neutron-5c8b8554dd-86wnt\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:45 crc kubenswrapper[4808]: I0217 16:14:45.982604 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.044153 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.060709 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.060750 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.060969 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.061036 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.061056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") pod \"317e56c8-5f01-4313-a632-12ccaccf9442\" (UID: \"317e56c8-5f01-4313-a632-12ccaccf9442\") " Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.065783 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5" (OuterVolumeSpecName: "kube-api-access-2l9h5") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "kube-api-access-2l9h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.165845 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9h5\" (UniqueName: \"kubernetes.io/projected/317e56c8-5f01-4313-a632-12ccaccf9442-kube-api-access-2l9h5\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.189030 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.195173 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config" (OuterVolumeSpecName: "config") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.234773 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.236825 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.247128 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "317e56c8-5f01-4313-a632-12ccaccf9442" (UID: "317e56c8-5f01-4313-a632-12ccaccf9442"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267258 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267290 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267300 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.267310 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/317e56c8-5f01-4313-a632-12ccaccf9442-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.282196 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:14:46 crc kubenswrapper[4808]: E0217 16:14:46.283399 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="init" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.283427 4808 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="init" Feb 17 16:14:46 crc kubenswrapper[4808]: E0217 16:14:46.283471 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.283479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.283697 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.308500 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.367715 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378493 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378510 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.378525 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.383247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pq8qq" event={"ID":"317e56c8-5f01-4313-a632-12ccaccf9442","Type":"ContainerDied","Data":"ddfff32a5e606c9bd26b149ee55b24df69316a56d9a9ba2c7680c271a80e072c"} Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.383320 4808 scope.go:117] "RemoveContainer" containerID="5bbec6100cf7c3218bd24bc7371072ff178631d539a209a85ec99f4282aadb9a" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 
16:14:46.383568 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pq8qq" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.453408 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.461905 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pq8qq"] Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.480866 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.480921 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.480978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481129 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " 
pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481160 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481182 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.481906 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.483259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.484159 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 
16:14:46.484940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.485309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.503517 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"dnsmasq-dns-55f844cf75-7t4g9\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") " pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: I0217 16:14:46.679556 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:14:46 crc kubenswrapper[4808]: E0217 16:14:46.745745 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.106872 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.108662 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.123719 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.123900 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.124005 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xhb8t" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.130401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.164979 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" path="/var/lib/kubelet/pods/317e56c8-5f01-4313-a632-12ccaccf9442/volumes" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195185 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195244 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195306 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.195438 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.196194 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.196255 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298313 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298443 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298473 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298544 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298610 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.298692 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.300247 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.301175 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.304044 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.304109 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/793125420e976eb43638bc1f8c10c1dbf19200ea40f241dea1aa3deff96042e8/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.304907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.306834 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.308557 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.331902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod 
\"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.357633 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.421366 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.425646 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.429535 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.439167 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.455279 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503788 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.503845 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.504072 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.504112 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.504147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607203 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607374 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607468 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607528 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607644 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.607675 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.608061 4808 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.608113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.611402 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.612807 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.612885 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.614095 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.614124 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babb0a58e49abb7abbb526a723d7265132519584485959e000cf4b8b02c96a84/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.640342 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.646436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.751766 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:14:47 crc kubenswrapper[4808]: I0217 16:14:47.997606 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-pq8qq" podUID="317e56c8-5f01-4313-a632-12ccaccf9442" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.020087 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.020281 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9mc46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-jcqjf_openstack(d0cc3be3-7aa7-4384-97ed-1ec7bf75f026): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.021479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-jcqjf" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" Feb 17 16:14:48 crc kubenswrapper[4808]: E0217 16:14:48.408759 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-jcqjf" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" Feb 17 16:14:50 crc kubenswrapper[4808]: I0217 16:14:50.984134 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.060309 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.675864 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.687812 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.687978 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688049 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688069 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " 
pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.688348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.710347 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 17 16:14:51 
crc kubenswrapper[4808]: I0217 16:14:51.710764 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790754 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"neutron-6576669595-nvtln\" (UID: 
\"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.790961 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.791041 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795476 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.795641 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: 
I0217 16:14:51.795831 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.797250 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.814430 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:51 crc kubenswrapper[4808]: I0217 16:14:51.815249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod \"neutron-6576669595-nvtln\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:52 crc kubenswrapper[4808]: I0217 16:14:52.038070 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:14:53 crc kubenswrapper[4808]: I0217 16:14:53.856011 4808 scope.go:117] "RemoveContainer" containerID="05efd9fb2a30652e1a674ecb739d46dca429eecdc2a90da4de03961953c36078" Feb 17 16:14:54 crc kubenswrapper[4808]: I0217 16:14:54.323825 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:14:54 crc kubenswrapper[4808]: W0217 16:14:54.516091 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb977bed_804c_4e4c_8d35_5562015024f3.slice/crio-c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880 WatchSource:0}: Error finding container c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880: Status 404 returned error can't find the container with id c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880 Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.527690 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.528010 4808 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.528478 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jmms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-wdrmd_openstack(2ec52dbb-ca2f-4013-8536-972042607240): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 17 16:14:54 crc kubenswrapper[4808]: E0217 16:14:54.529691 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-wdrmd" podUID="2ec52dbb-ca2f-4013-8536-972042607240" Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.169795 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.172045 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.262349 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.323044 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.391970 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.554255 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.560781 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerStarted","Data":"673b376ab9a6f91954598ab4a63c75d818d8ff65e3bf87016ce8c6e162ed2846"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.581166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerStarted","Data":"6a095cda0c57e7c83e37162d0a00993ab0fc7d2ed318b1cd5b24f7f8e6f8ed0d"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.593443 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerStarted","Data":"37ecb8a325939b5e585da0c83aac7cd196aa16f8c7e46e0941abecb0dea07a08"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.594654 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerStarted","Data":"7582431cc96f656a76c273158d6a6121cb9dd22056c9bc46740b2c3ec436de2b"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.597145 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerStarted","Data":"f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.597173 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerStarted","Data":"c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.599179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" 
event={"ID":"a79be637-3b6e-4ccf-8bbe-95b1baf64444","Type":"ContainerStarted","Data":"24c9ce81f9e602d6a930f27dc304d5868bca2e20b4aea4429bb4f1c683cfc845"} Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.606853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerStarted","Data":"8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7"} Feb 17 16:14:55 crc kubenswrapper[4808]: E0217 16:14:55.609279 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-wdrmd" podUID="2ec52dbb-ca2f-4013-8536-972042607240" Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.616013 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-67f4b" podStartSLOduration=21.615995931 podStartE2EDuration="21.615995931s" podCreationTimestamp="2026-02-17 16:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:55.614233604 +0000 UTC m=+1259.130592677" watchObservedRunningTime="2026-02-17 16:14:55.615995931 +0000 UTC m=+1259.132355014" Feb 17 16:14:55 crc kubenswrapper[4808]: I0217 16:14:55.653909 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-d52vg" podStartSLOduration=7.1351817650000005 podStartE2EDuration="36.653892307s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:21.010607125 +0000 UTC m=+1224.526966198" lastFinishedPulling="2026-02-17 16:14:50.529317617 +0000 UTC m=+1254.045676740" observedRunningTime="2026-02-17 16:14:55.652875529 +0000 UTC m=+1259.169234602" 
watchObservedRunningTime="2026-02-17 16:14:55.653892307 +0000 UTC m=+1259.170251380" Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.148219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.616079 4808 generic.go:334] "Generic (PLEG): container finished" podID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerID="dddcaac247851948b323e115b84153bfcbcb71436b40ee468a0fbbfe54d676ae" exitCode=0 Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.616159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerDied","Data":"dddcaac247851948b323e115b84153bfcbcb71436b40ee468a0fbbfe54d676ae"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.626441 4808 generic.go:334] "Generic (PLEG): container finished" podID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerID="bcee8f3f2e22515c4ec2c71a0c369ae17f4dcd41bc80c7856231434378167962" exitCode=0 Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.626502 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" event={"ID":"a79be637-3b6e-4ccf-8bbe-95b1baf64444","Type":"ContainerDied","Data":"bcee8f3f2e22515c4ec2c71a0c369ae17f4dcd41bc80c7856231434378167962"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.633307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerStarted","Data":"811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.643067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" 
event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerStarted","Data":"6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.643131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerStarted","Data":"f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.643149 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.648317 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerStarted","Data":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} Feb 17 16:14:56 crc kubenswrapper[4808]: I0217 16:14:56.734756 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c8b8554dd-86wnt" podStartSLOduration=11.734719771 podStartE2EDuration="11.734719771s" podCreationTimestamp="2026-02-17 16:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:14:56.703949178 +0000 UTC m=+1260.220308251" watchObservedRunningTime="2026-02-17 16:14:56.734719771 +0000 UTC m=+1260.251078844" Feb 17 16:14:57 crc kubenswrapper[4808]: E0217 16:14:57.068000 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-8d4b256de0544b61472bec728b8a9f6596b6505c3ff6baf74b4b74f9988e76dc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2917eca2_0431_4bd6_ad96_ab8464cc4fd7.slice/crio-conmon-3e1259ba3d26a0e7de7e3a0ca80bca8985317419bb22e9888ef6fc0a7e83aec7.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.133341 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:57 crc kubenswrapper[4808]: E0217 16:14:57.185151 4808 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8220dc80e343188ccc976cd01bb233632f3de453fd04815105dfdf15196faa6a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8220dc80e343188ccc976cd01bb233632f3de453fd04815105dfdf15196faa6a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_prometheus-metric-storage-0_2917eca2-0431-4bd6-ad96-ab8464cc4fd7/config-reloader/0.log" to get inode usage: stat /var/log/pods/openstack_prometheus-metric-storage-0_2917eca2-0431-4bd6-ad96-ab8464cc4fd7/config-reloader/0.log: no such file or directory Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.321229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.321988 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322021 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322046 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322076 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.322164 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") pod \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\" (UID: \"a79be637-3b6e-4ccf-8bbe-95b1baf64444\") " Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.356372 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877" (OuterVolumeSpecName: "kube-api-access-jr877") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "kube-api-access-jr877". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.357365 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.360048 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config" (OuterVolumeSpecName: "config") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.370545 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.371912 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.387338 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a79be637-3b6e-4ccf-8bbe-95b1baf64444" (UID: "a79be637-3b6e-4ccf-8bbe-95b1baf64444"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427090 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427140 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427156 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427171 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427183 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr877\" (UniqueName: \"kubernetes.io/projected/a79be637-3b6e-4ccf-8bbe-95b1baf64444-kube-api-access-jr877\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.427196 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a79be637-3b6e-4ccf-8bbe-95b1baf64444-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.656735 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerStarted","Data":"c10fc6d6f2a4869db9fa18326dfe2683218bcdc439daca6286604be99d676aab"} Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.658953 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" event={"ID":"a79be637-3b6e-4ccf-8bbe-95b1baf64444","Type":"ContainerDied","Data":"24c9ce81f9e602d6a930f27dc304d5868bca2e20b4aea4429bb4f1c683cfc845"} Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.659007 4808 scope.go:117] "RemoveContainer" containerID="bcee8f3f2e22515c4ec2c71a0c369ae17f4dcd41bc80c7856231434378167962" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.659044 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d88d7b95f-kcq78" Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.758064 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:57 crc kubenswrapper[4808]: I0217 16:14:57.772371 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d88d7b95f-kcq78"] Feb 17 16:14:59 crc kubenswrapper[4808]: I0217 16:14:59.162808 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" path="/var/lib/kubelet/pods/a79be637-3b6e-4ccf-8bbe-95b1baf64444/volumes" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.154853 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 16:15:00 crc kubenswrapper[4808]: E0217 16:15:00.155349 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerName="init" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.155372 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerName="init" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.155652 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79be637-3b6e-4ccf-8bbe-95b1baf64444" containerName="init" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.156520 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.161003 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.161026 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.169064 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.287121 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.287212 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.287823 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.389404 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.389490 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.389646 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.390496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.395276 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.408312 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"collect-profiles-29522415-pp7nh\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:00 crc kubenswrapper[4808]: I0217 16:15:00.489716 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:01 crc kubenswrapper[4808]: I0217 16:15:01.710211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerStarted","Data":"fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84"} Feb 17 16:15:01 crc kubenswrapper[4808]: I0217 16:15:01.713046 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerStarted","Data":"f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1"} Feb 17 16:15:03 crc kubenswrapper[4808]: I0217 16:15:03.733668 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerStarted","Data":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} Feb 17 16:15:03 crc kubenswrapper[4808]: I0217 16:15:03.736941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerStarted","Data":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.594929 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"] Feb 17 16:15:04 crc kubenswrapper[4808]: W0217 16:15:04.610196 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41f86f53_7772_428e_b916_8624c83de123.slice/crio-bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9 WatchSource:0}: Error finding container bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9: Status 404 returned error can't find the container with id bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.752181 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" event={"ID":"41f86f53-7772-428e-b916-8624c83de123","Type":"ContainerStarted","Data":"bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.754593 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.755942 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerStarted","Data":"d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759274 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="bb977bed-804c-4e4c-8d35-5562015024f3" containerID="f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c" exitCode=0 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerDied","Data":"f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c"} Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759586 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" containerID="cri-o://8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" gracePeriod=30 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759751 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" containerID="cri-o://25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" gracePeriod=30 Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759779 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.759853 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.779484 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rwld8" podStartSLOduration=2.509486281 podStartE2EDuration="45.779467457s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:20.946027596 +0000 UTC m=+1224.462386669" lastFinishedPulling="2026-02-17 16:15:04.216008772 +0000 UTC m=+1267.732367845" 
observedRunningTime="2026-02-17 16:15:04.76881668 +0000 UTC m=+1268.285175753" watchObservedRunningTime="2026-02-17 16:15:04.779467457 +0000 UTC m=+1268.295826530" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.810906 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6576669595-nvtln" podStartSLOduration=13.810884099 podStartE2EDuration="13.810884099s" podCreationTimestamp="2026-02-17 16:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:04.788107181 +0000 UTC m=+1268.304466254" watchObservedRunningTime="2026-02-17 16:15:04.810884099 +0000 UTC m=+1268.327243172" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.840425 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" podStartSLOduration=18.840382617 podStartE2EDuration="18.840382617s" podCreationTimestamp="2026-02-17 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:04.825061342 +0000 UTC m=+1268.341420435" watchObservedRunningTime="2026-02-17 16:15:04.840382617 +0000 UTC m=+1268.356741690" Feb 17 16:15:04 crc kubenswrapper[4808]: I0217 16:15:04.851029 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=18.851002285 podStartE2EDuration="18.851002285s" podCreationTimestamp="2026-02-17 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:04.847923421 +0000 UTC m=+1268.364282494" watchObservedRunningTime="2026-02-17 16:15:04.851002285 +0000 UTC m=+1268.367361358" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.772687 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.775026 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log" containerID="cri-o://98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" gracePeriod=30 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.775098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerStarted","Data":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.775180 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd" containerID="cri-o://4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" gracePeriod=30 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785092 4808 generic.go:334] "Generic (PLEG): container finished" podID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" exitCode=0 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785137 4808 generic.go:334] "Generic (PLEG): container finished" podID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" exitCode=143 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785234 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerDied","Data":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785742 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerDied","Data":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785787 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"03b7a5d2-f785-4f3f-962d-b82b7d922dde","Type":"ContainerDied","Data":"7582431cc96f656a76c273158d6a6121cb9dd22056c9bc46740b2c3ec436de2b"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785819 4808 scope.go:117] "RemoveContainer" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.785685 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.799507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerStarted","Data":"605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.816788 4808 generic.go:334] "Generic (PLEG): container finished" podID="41f86f53-7772-428e-b916-8624c83de123" containerID="af2c8b60da9d5276edbe2e0351b8e1093617fb76e21f063ad9744c8103bb6313" exitCode=0 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.816855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" event={"ID":"41f86f53-7772-428e-b916-8624c83de123","Type":"ContainerDied","Data":"af2c8b60da9d5276edbe2e0351b8e1093617fb76e21f063ad9744c8103bb6313"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.824396 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="b7820c3c-fe38-46dd-906a-498a579d0805" containerID="8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7" exitCode=0 Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.825337 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerDied","Data":"8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7"} Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.835632 4808 scope.go:117] "RemoveContainer" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.847560 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=19.847541746 podStartE2EDuration="19.847541746s" podCreationTimestamp="2026-02-17 16:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:05.834848182 +0000 UTC m=+1269.351207255" watchObservedRunningTime="2026-02-17 16:15:05.847541746 +0000 UTC m=+1269.363900819" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.906762 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-jcqjf" podStartSLOduration=2.961862571 podStartE2EDuration="46.906742869s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:20.583674336 +0000 UTC m=+1224.100033409" lastFinishedPulling="2026-02-17 16:15:04.528554634 +0000 UTC m=+1268.044913707" observedRunningTime="2026-02-17 16:15:05.902638358 +0000 UTC m=+1269.418997431" watchObservedRunningTime="2026-02-17 16:15:05.906742869 +0000 UTC m=+1269.423101952" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909183 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909398 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909447 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909501 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909651 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") pod \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\" (UID: \"03b7a5d2-f785-4f3f-962d-b82b7d922dde\") " Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.909935 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs" (OuterVolumeSpecName: "logs") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.910683 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.910943 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.922795 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts" (OuterVolumeSpecName: "scripts") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.922928 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5" (OuterVolumeSpecName: "kube-api-access-mkqj5") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "kube-api-access-mkqj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.923053 4808 scope.go:117] "RemoveContainer" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: E0217 16:15:05.923731 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": container with ID starting with 25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b not found: ID does not exist" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.923760 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} err="failed to get container status \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": rpc error: code = NotFound desc = could not find container \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": container with ID starting with 25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.923787 4808 scope.go:117] "RemoveContainer" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: E0217 16:15:05.926118 4808 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": container with ID starting with 8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716 not found: ID does not exist" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926158 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} err="failed to get container status \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": rpc error: code = NotFound desc = could not find container \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": container with ID starting with 8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716 not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926215 4808 scope.go:117] "RemoveContainer" containerID="25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926607 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b"} err="failed to get container status \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": rpc error: code = NotFound desc = could not find container \"25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b\": container with ID starting with 25de9ada2140932663cc119067041efca1131c57d9655bb4cb7717162f43201b not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.926633 4808 scope.go:117] "RemoveContainer" containerID="8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 
16:15:05.927008 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716"} err="failed to get container status \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": rpc error: code = NotFound desc = could not find container \"8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716\": container with ID starting with 8656e3c9fa45f0ac52f9b29a68303796673607ed203072b87aa029326ec96716 not found: ID does not exist" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.938515 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (OuterVolumeSpecName: "glance") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.972057 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:05 crc kubenswrapper[4808]: I0217 16:15:05.976046 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data" (OuterVolumeSpecName: "config-data") pod "03b7a5d2-f785-4f3f-962d-b82b7d922dde" (UID: "03b7a5d2-f785-4f3f-962d-b82b7d922dde"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.012958 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013034 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013047 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkqj5\" (UniqueName: \"kubernetes.io/projected/03b7a5d2-f785-4f3f-962d-b82b7d922dde-kube-api-access-mkqj5\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013058 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013069 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03b7a5d2-f785-4f3f-962d-b82b7d922dde-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.013481 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03b7a5d2-f785-4f3f-962d-b82b7d922dde-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.056211 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.056519 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522") on node "crc" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.119373 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.146537 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.176648 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.189715 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: E0217 16:15:06.190166 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190179 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" Feb 17 16:15:06 crc kubenswrapper[4808]: E0217 16:15:06.190197 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190205 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190372 4808 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-log" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.190394 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" containerName="glance-httpd" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.194055 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.201049 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.201272 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.240739 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.264588 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-67f4b" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323317 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323443 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323494 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323531 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323728 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.323762 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") pod \"bb977bed-804c-4e4c-8d35-5562015024f3\" (UID: \"bb977bed-804c-4e4c-8d35-5562015024f3\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324051 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324143 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324199 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324333 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.324439 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.333241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.334111 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts" (OuterVolumeSpecName: "scripts") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.360900 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8" (OuterVolumeSpecName: "kube-api-access-h27j8") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "kube-api-access-h27j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.361206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.365416 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data" (OuterVolumeSpecName: "config-data") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.366436 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb977bed-804c-4e4c-8d35-5562015024f3" (UID: "bb977bed-804c-4e4c-8d35-5562015024f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426373 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426483 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426648 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426674 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426742 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426757 4808 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426769 4808 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426781 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426792 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb977bed-804c-4e4c-8d35-5562015024f3-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.426804 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h27j8\" (UniqueName: \"kubernetes.io/projected/bb977bed-804c-4e4c-8d35-5562015024f3-kube-api-access-h27j8\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.429976 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.430194 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 
16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.431125 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.433178 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.435228 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.435876 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.435903 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/793125420e976eb43638bc1f8c10c1dbf19200ea40f241dea1aa3deff96042e8/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.436436 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.447204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.468473 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.491789 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.579650 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.669814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.669923 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.669998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670029 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670063 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") " Feb 17 16:15:06 crc 
kubenswrapper[4808]: I0217 16:15:06.670099 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") "
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.670327 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\" (UID: \"f547a16d-87f8-4ee7-96a5-c4039bfdb453\") "
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.672019 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.673410 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs" (OuterVolumeSpecName: "logs") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.682201 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x" (OuterVolumeSpecName: "kube-api-access-7fc4x") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "kube-api-access-7fc4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.684591 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts" (OuterVolumeSpecName: "scripts") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.689433 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9"
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.690251 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (OuterVolumeSpecName: "glance") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.729150 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.773940 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.773976 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f547a16d-87f8-4ee7-96a5-c4039bfdb453-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.773988 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fc4x\" (UniqueName: \"kubernetes.io/projected/f547a16d-87f8-4ee7-96a5-c4039bfdb453-kube-api-access-7fc4x\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.774017 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" "
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.774029 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.774040 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.783880 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data" (OuterVolumeSpecName: "config-data") pod "f547a16d-87f8-4ee7-96a5-c4039bfdb453" (UID: "f547a16d-87f8-4ee7-96a5-c4039bfdb453"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.808842 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"]
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.809233 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" containerID="cri-o://efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" gracePeriod=10
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.854780 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.855240 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c") on node "crc"
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.883963 4808 generic.go:334] "Generic (PLEG): container finished" podID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795" exitCode=0
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884009 4808 generic.go:334] "Generic (PLEG): container finished" podID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2" exitCode=143
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884153 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884549 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerDied","Data":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"}
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884604 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerDied","Data":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"}
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884621 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f547a16d-87f8-4ee7-96a5-c4039bfdb453","Type":"ContainerDied","Data":"c10fc6d6f2a4869db9fa18326dfe2683218bcdc439daca6286604be99d676aab"}
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.884639 4808 scope.go:117] "RemoveContainer" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.885697 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.885905 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f547a16d-87f8-4ee7-96a5-c4039bfdb453-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.914482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerStarted","Data":"a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652"}
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.936811 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-67f4b"
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.937461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-67f4b" event={"ID":"bb977bed-804c-4e4c-8d35-5562015024f3","Type":"ContainerDied","Data":"c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880"}
Feb 17 16:15:06 crc kubenswrapper[4808]: I0217 16:15:06.937553 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81162eb89cbecee97cfac1cc5229cbf6b84ca62ed280abed73ac2d3607e8880"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.000388 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.000833 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.020598 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-wdrmd" podStartSLOduration=2.805470155 podStartE2EDuration="48.020564586s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:21.193016813 +0000 UTC m=+1224.709375886" lastFinishedPulling="2026-02-17 16:15:06.408111244 +0000 UTC m=+1269.924470317" observedRunningTime="2026-02-17 16:15:06.968302032 +0000 UTC m=+1270.484661105" watchObservedRunningTime="2026-02-17 16:15:07.020564586 +0000 UTC m=+1270.536923659"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.026907 4808 scope.go:117] "RemoveContainer" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045281 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.045712 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045730 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log"
Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.045755 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045763 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd"
Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.045776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" containerName="keystone-bootstrap"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045783 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" containerName="keystone-bootstrap"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045958 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-httpd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045974 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" containerName="glance-log"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.045989 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" containerName="keystone-bootstrap"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.057392 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.057840 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.074871 4808 scope.go:117] "RemoveContainer" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.075500 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.075705 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.092388 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": container with ID starting with 4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795 not found: ID does not exist" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.092418 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} err="failed to get container status \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": rpc error: code = NotFound desc = could not find container \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": container with ID starting with 4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795 not found: ID does not exist"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.092442 4808 scope.go:117] "RemoveContainer" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"
Feb 17 16:15:07 crc kubenswrapper[4808]: E0217 16:15:07.093006 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": container with ID starting with 98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2 not found: ID does not exist" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093049 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} err="failed to get container status \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": rpc error: code = NotFound desc = could not find container \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": container with ID starting with 98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2 not found: ID does not exist"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093093 4808 scope.go:117] "RemoveContainer" containerID="4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093354 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795"} err="failed to get container status \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": rpc error: code = NotFound desc = could not find container \"4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795\": container with ID starting with 4bbef9953a9c9890b80dda3c9f4babd7fbeefce28d6383ea9729de6c043c3795 not found: ID does not exist"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093373 4808 scope.go:117] "RemoveContainer" containerID="98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.093551 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2"} err="failed to get container status \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": rpc error: code = NotFound desc = could not find container \"98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2\": container with ID starting with 98730bd34bd002dd75d1fca6da0a1fce856a905d55bcd7e32dc87a631af01ed2 not found: ID does not exist"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.192513 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b7a5d2-f785-4f3f-962d-b82b7d922dde" path="/var/lib/kubelet/pods/03b7a5d2-f785-4f3f-962d-b82b7d922dde/volumes"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.193346 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f547a16d-87f8-4ee7-96a5-c4039bfdb453" path="/var/lib/kubelet/pods/f547a16d-87f8-4ee7-96a5-c4039bfdb453/volumes"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.194087 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-679dfcbbb9-npbsd"]
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.199204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679dfcbbb9-npbsd"]
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.199282 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.205879 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206047 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6x2tm"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206192 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206273 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.206436 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.208485 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.208829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209157 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209453 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.209933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.215113 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.266262 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316734 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-scripts\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316841 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-fernet-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316891 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316912 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-internal-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316961 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-credential-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.316986 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317010 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvpjm\" (UniqueName: \"kubernetes.io/projected/8a521aa0-4048-49a0-b6c1-32e07f349ac5-kube-api-access-xvpjm\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317036 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317079 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-config-data\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317098 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-public-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.317161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-combined-ca-bundle\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.322219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.324414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.330777 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.331029 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.331840 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.338862 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.338914 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babb0a58e49abb7abbb526a723d7265132519584485959e000cf4b8b02c96a84/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.341907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.342460 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.421647 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvpjm\" (UniqueName: \"kubernetes.io/projected/8a521aa0-4048-49a0-b6c1-32e07f349ac5-kube-api-access-xvpjm\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422489 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-config-data\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-public-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422592 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-combined-ca-bundle\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-scripts\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422680 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-fernet-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-internal-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.422800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-credential-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.429824 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-credential-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.430481 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-public-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.436217 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-fernet-keys\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.439296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-config-data\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.443984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.448408 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-combined-ca-bundle\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.450904 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-scripts\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.472194 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvpjm\" (UniqueName: \"kubernetes.io/projected/8a521aa0-4048-49a0-b6c1-32e07f349ac5-kube-api-access-xvpjm\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.484825 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a521aa0-4048-49a0-b6c1-32e07f349ac5-internal-tls-certs\") pod \"keystone-679dfcbbb9-npbsd\" (UID: \"8a521aa0-4048-49a0-b6c1-32e07f349ac5\") " pod="openstack/keystone-679dfcbbb9-npbsd"
Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.507339 4808
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.530025 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.786994 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.852791 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") pod \"41f86f53-7772-428e-b916-8624c83de123\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.852857 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") pod \"41f86f53-7772-428e-b916-8624c83de123\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.852998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") pod \"41f86f53-7772-428e-b916-8624c83de123\" (UID: \"41f86f53-7772-428e-b916-8624c83de123\") " Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.854269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume" (OuterVolumeSpecName: "config-volume") pod "41f86f53-7772-428e-b916-8624c83de123" (UID: "41f86f53-7772-428e-b916-8624c83de123"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.857609 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp" (OuterVolumeSpecName: "kube-api-access-zg4tp") pod "41f86f53-7772-428e-b916-8624c83de123" (UID: "41f86f53-7772-428e-b916-8624c83de123"). InnerVolumeSpecName "kube-api-access-zg4tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.857769 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "41f86f53-7772-428e-b916-8624c83de123" (UID: "41f86f53-7772-428e-b916-8624c83de123"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.909714 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.939320 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.957080 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41f86f53-7772-428e-b916-8624c83de123-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.957110 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41f86f53-7772-428e-b916-8624c83de123-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.957124 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg4tp\" (UniqueName: \"kubernetes.io/projected/41f86f53-7772-428e-b916-8624c83de123-kube-api-access-zg4tp\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.973603 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.973607 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh" event={"ID":"41f86f53-7772-428e-b916-8624c83de123","Type":"ContainerDied","Data":"bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9"} Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.973830 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb87748ac53790d547ebe98fbf611fde3c6a82de7d4e177315d64123d64ebf9" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.978674 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-d52vg" Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.978804 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-d52vg" event={"ID":"b7820c3c-fe38-46dd-906a-498a579d0805","Type":"ContainerDied","Data":"5b531905add091d4dfe9c3b871669f1b4764b98e78ffc02ea10bcfde5b754358"} Feb 17 16:15:07 crc kubenswrapper[4808]: I0217 16:15:07.978841 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b531905add091d4dfe9c3b871669f1b4764b98e78ffc02ea10bcfde5b754358" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.061895 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.061962 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062006 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062081 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 
16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062203 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062247 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062383 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062415 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") pod \"b7820c3c-fe38-46dd-906a-498a579d0805\" (UID: \"b7820c3c-fe38-46dd-906a-498a579d0805\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062477 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.062527 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") pod \"ac763412-39e7-40d0-892a-57ac801af2bb\" (UID: \"ac763412-39e7-40d0-892a-57ac801af2bb\") " Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.063467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs" (OuterVolumeSpecName: "logs") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.066056 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7820c3c-fe38-46dd-906a-498a579d0805-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067433 4808 generic.go:334] "Generic (PLEG): container finished" podID="ac763412-39e7-40d0-892a-57ac801af2bb" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" exitCode=0 Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067666 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067793 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerDied","Data":"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9"} Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.067910 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-bbhtn" event={"ID":"ac763412-39e7-40d0-892a-57ac801af2bb","Type":"ContainerDied","Data":"027ce35e95410cc92a867a6b938a45485c623b5bfa8d8827b979b970dbe86f22"} Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.070379 4808 scope.go:117] "RemoveContainer" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.082880 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw" (OuterVolumeSpecName: "kube-api-access-zz8lw") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "kube-api-access-zz8lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.086753 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr" (OuterVolumeSpecName: "kube-api-access-7bzxr") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "kube-api-access-7bzxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.088298 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts" (OuterVolumeSpecName: "scripts") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.125896 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.143945 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerStarted","Data":"5259b7f9e5eb8d16dd9b6467f0a2e9d1eee838ac2578fd7225262f0187ce85fa"} Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.150674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data" (OuterVolumeSpecName: "config-data") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.162553 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7820c3c-fe38-46dd-906a-498a579d0805" (UID: "b7820c3c-fe38-46dd-906a-498a579d0805"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168092 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168109 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz8lw\" (UniqueName: \"kubernetes.io/projected/ac763412-39e7-40d0-892a-57ac801af2bb-kube-api-access-zz8lw\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168121 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168130 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bzxr\" (UniqueName: \"kubernetes.io/projected/b7820c3c-fe38-46dd-906a-498a579d0805-kube-api-access-7bzxr\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168140 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.168148 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b7820c3c-fe38-46dd-906a-498a579d0805-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.183999 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config" (OuterVolumeSpecName: "config") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.198198 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.198377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.136643 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac763412-39e7-40d0-892a-57ac801af2bb" (UID: "ac763412-39e7-40d0-892a-57ac801af2bb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.272934 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.275953 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.276047 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.276114 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac763412-39e7-40d0-892a-57ac801af2bb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.296032 4808 scope.go:117] "RemoveContainer" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.346979 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-679dfcbbb9-npbsd"] Feb 17 16:15:08 crc kubenswrapper[4808]: W0217 16:15:08.355999 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a521aa0_4048_49a0_b6c1_32e07f349ac5.slice/crio-ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48 WatchSource:0}: Error finding container ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48: Status 404 returned error can't find the container with id ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48 Feb 17 
16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.370873 4808 scope.go:117] "RemoveContainer" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" Feb 17 16:15:08 crc kubenswrapper[4808]: E0217 16:15:08.371433 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9\": container with ID starting with efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9 not found: ID does not exist" containerID="efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.371460 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9"} err="failed to get container status \"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9\": rpc error: code = NotFound desc = could not find container \"efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9\": container with ID starting with efb29cb8354ee1065418cb03cb216915e7b1e0246bdd1f63d45fcf6320a29eb9 not found: ID does not exist" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.371480 4808 scope.go:117] "RemoveContainer" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" Feb 17 16:15:08 crc kubenswrapper[4808]: E0217 16:15:08.371735 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0\": container with ID starting with 3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0 not found: ID does not exist" containerID="3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.371750 4808 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0"} err="failed to get container status \"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0\": rpc error: code = NotFound desc = could not find container \"3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0\": container with ID starting with 3cd5c53464fedd37e9d9819c27c7cd7bc3734963bedd089eb5eac87ece7032f0 not found: ID does not exist" Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.557192 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.583956 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-bbhtn"] Feb 17 16:15:08 crc kubenswrapper[4808]: I0217 16:15:08.613298 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165317 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" path="/var/lib/kubelet/pods/ac763412-39e7-40d0-892a-57ac801af2bb/volumes" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165909 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165926 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679dfcbbb9-npbsd" event={"ID":"8a521aa0-4048-49a0-b6c1-32e07f349ac5","Type":"ContainerStarted","Data":"9b80e856a1484d326bbd785dad5941a60017ee1129bcf6e5805f921083557b78"} Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.165939 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-679dfcbbb9-npbsd" 
event={"ID":"8a521aa0-4048-49a0-b6c1-32e07f349ac5","Type":"ContainerStarted","Data":"ad14d058aa0dac229a220b344a8765da6ec123e103f1c1521525d11603c01b48"} Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.172759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerStarted","Data":"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f"} Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.177156 4808 generic.go:334] "Generic (PLEG): container finished" podID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerID="d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a" exitCode=0 Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.177254 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerDied","Data":"d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a"} Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.181817 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"674bc197545e528a3fae6a8ee441743eba630fd0f6cf0ca9277898370f13b963"} Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.196675 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-679dfcbbb9-npbsd" podStartSLOduration=3.196659065 podStartE2EDuration="3.196659065s" podCreationTimestamp="2026-02-17 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:09.195840223 +0000 UTC m=+1272.712199296" watchObservedRunningTime="2026-02-17 16:15:09.196659065 +0000 UTC m=+1272.713018138" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.324567 4808 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/placement-76b995d5cb-7xs25"] Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325422 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325450 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325460 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" containerName="placement-db-sync" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325476 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" containerName="placement-db-sync" Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325489 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="init" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325499 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="init" Feb 17 16:15:09 crc kubenswrapper[4808]: E0217 16:15:09.325510 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f86f53-7772-428e-b916-8624c83de123" containerName="collect-profiles" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325519 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f86f53-7772-428e-b916-8624c83de123" containerName="collect-profiles" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325801 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f86f53-7772-428e-b916-8624c83de123" containerName="collect-profiles" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325847 4808 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b7820c3c-fe38-46dd-906a-498a579d0805" containerName="placement-db-sync" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.325866 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac763412-39e7-40d0-892a-57ac801af2bb" containerName="dnsmasq-dns" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.327279 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.330889 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336114 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336289 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336384 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-p4pcv" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.336486 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.343738 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76b995d5cb-7xs25"] Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401487 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-combined-ca-bundle\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401556 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-scripts\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401627 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-config-data\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401659 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmrh\" (UniqueName: \"kubernetes.io/projected/ab7f0766-47a0-4616-b6dc-32957d59188a-kube-api-access-msmrh\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401693 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-public-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-internal-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.401779 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f0766-47a0-4616-b6dc-32957d59188a-logs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506136 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f0766-47a0-4616-b6dc-32957d59188a-logs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506305 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-combined-ca-bundle\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-scripts\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-config-data\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmrh\" (UniqueName: 
\"kubernetes.io/projected/ab7f0766-47a0-4616-b6dc-32957d59188a-kube-api-access-msmrh\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506484 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-public-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-internal-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.506624 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab7f0766-47a0-4616-b6dc-32957d59188a-logs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.512338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-internal-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.513241 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-config-data\") pod 
\"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.515086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-scripts\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.515167 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-public-tls-certs\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.517035 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab7f0766-47a0-4616-b6dc-32957d59188a-combined-ca-bundle\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.529950 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmrh\" (UniqueName: \"kubernetes.io/projected/ab7f0766-47a0-4616-b6dc-32957d59188a-kube-api-access-msmrh\") pod \"placement-76b995d5cb-7xs25\" (UID: \"ab7f0766-47a0-4616-b6dc-32957d59188a\") " pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:09 crc kubenswrapper[4808]: I0217 16:15:09.653363 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.196117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerStarted","Data":"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5"} Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.209726 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"} Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.209765 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerStarted","Data":"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"} Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.234293 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.234270678 podStartE2EDuration="4.234270678s" podCreationTimestamp="2026-02-17 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:10.230686811 +0000 UTC m=+1273.747045894" watchObservedRunningTime="2026-02-17 16:15:10.234270678 +0000 UTC m=+1273.750629751" Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.255591 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.255555805 podStartE2EDuration="4.255555805s" podCreationTimestamp="2026-02-17 16:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-17 16:15:10.25425364 +0000 UTC m=+1273.770612723" watchObservedRunningTime="2026-02-17 16:15:10.255555805 +0000 UTC m=+1273.771914878" Feb 17 16:15:10 crc kubenswrapper[4808]: I0217 16:15:10.330634 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-76b995d5cb-7xs25"] Feb 17 16:15:11 crc kubenswrapper[4808]: W0217 16:15:11.994910 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab7f0766_47a0_4616_b6dc_32957d59188a.slice/crio-b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428 WatchSource:0}: Error finding container b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428: Status 404 returned error can't find the container with id b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428 Feb 17 16:15:12 crc kubenswrapper[4808]: I0217 16:15:12.235332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b995d5cb-7xs25" event={"ID":"ab7f0766-47a0-4616-b6dc-32957d59188a","Type":"ContainerStarted","Data":"b48a6abc26c7e221dbdced60372c9a60fa60a080c578e82c39e83edd08b08428"} Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.248875 4808 generic.go:334] "Generic (PLEG): container finished" podID="2ec52dbb-ca2f-4013-8536-972042607240" containerID="a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652" exitCode=0 Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.248944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerDied","Data":"a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652"} Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.252559 4808 generic.go:334] "Generic (PLEG): container finished" podID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerID="605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06" exitCode=0 
Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.252635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerDied","Data":"605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06"} Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.564854 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.700115 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") pod \"5bf4d932-664a-46c6-bec5-f2b70950c824\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.700163 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") pod \"5bf4d932-664a-46c6-bec5-f2b70950c824\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.700191 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") pod \"5bf4d932-664a-46c6-bec5-f2b70950c824\" (UID: \"5bf4d932-664a-46c6-bec5-f2b70950c824\") " Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.703620 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5bf4d932-664a-46c6-bec5-f2b70950c824" (UID: "5bf4d932-664a-46c6-bec5-f2b70950c824"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.704206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8" (OuterVolumeSpecName: "kube-api-access-2zvc8") pod "5bf4d932-664a-46c6-bec5-f2b70950c824" (UID: "5bf4d932-664a-46c6-bec5-f2b70950c824"). InnerVolumeSpecName "kube-api-access-2zvc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.731237 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bf4d932-664a-46c6-bec5-f2b70950c824" (UID: "5bf4d932-664a-46c6-bec5-f2b70950c824"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.803510 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.803821 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zvc8\" (UniqueName: \"kubernetes.io/projected/5bf4d932-664a-46c6-bec5-f2b70950c824-kube-api-access-2zvc8\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:13 crc kubenswrapper[4808]: I0217 16:15:13.803837 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bf4d932-664a-46c6-bec5-f2b70950c824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.275983 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.278494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rwld8" event={"ID":"5bf4d932-664a-46c6-bec5-f2b70950c824","Type":"ContainerDied","Data":"9ba656f842dfb00605cd2712c9679dadbf966fdee137e5405e4ec802b02357c9"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.278546 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba656f842dfb00605cd2712c9679dadbf966fdee137e5405e4ec802b02357c9" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.278674 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rwld8" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284596 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b995d5cb-7xs25" event={"ID":"ab7f0766-47a0-4616-b6dc-32957d59188a","Type":"ContainerStarted","Data":"1ac5810a1c1e5917de8eae77f2195ae692569c3a3124154a08bc9b36894f6566"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284663 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-76b995d5cb-7xs25" event={"ID":"ab7f0766-47a0-4616-b6dc-32957d59188a","Type":"ContainerStarted","Data":"94683a775902e76377bb4a1d51e3c26fa151e5d1d30203b370523ab19d1a4405"} Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284877 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.284929 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.316691 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-76b995d5cb-7xs25" podStartSLOduration=5.316674262 podStartE2EDuration="5.316674262s" podCreationTimestamp="2026-02-17 16:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:14.315937242 +0000 UTC m=+1277.832296335" watchObservedRunningTime="2026-02-17 16:15:14.316674262 +0000 UTC m=+1277.833033345" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.796584 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.803546 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927092 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927442 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jmms\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927463 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927551 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927611 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927640 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927695 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927752 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927783 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: 
\"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927802 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") pod \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\" (UID: \"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.927864 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") pod \"2ec52dbb-ca2f-4013-8536-972042607240\" (UID: \"2ec52dbb-ca2f-4013-8536-972042607240\") " Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.928541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.928893 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.942028 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts" (OuterVolumeSpecName: "scripts") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.960681 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms" (OuterVolumeSpecName: "kube-api-access-5jmms") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "kube-api-access-5jmms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.960861 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962346 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d78867d94-7lhqs"] Feb 17 16:15:14 crc kubenswrapper[4808]: E0217 16:15:14.962912 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerName="barbican-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962938 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerName="barbican-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: E0217 16:15:14.962949 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec52dbb-ca2f-4013-8536-972042607240" containerName="cloudkitty-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962957 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec52dbb-ca2f-4013-8536-972042607240" containerName="cloudkitty-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: E0217 
16:15:14.962979 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerName="cinder-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.962986 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerName="cinder-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963173 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" containerName="cinder-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963190 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec52dbb-ca2f-4013-8536-972042607240" containerName="cloudkitty-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963205 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" containerName="barbican-db-sync" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.963477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46" (OuterVolumeSpecName: "kube-api-access-9mc46") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "kube-api-access-9mc46". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.965156 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.968403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs" (OuterVolumeSpecName: "certs") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.969260 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-26x5l" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.969653 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.992871 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts" (OuterVolumeSpecName: "scripts") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.993140 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 17 16:15:14 crc kubenswrapper[4808]: I0217 16:15:14.999122 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55f6d995c5-hnz4n"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.001295 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.007738 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.019283 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data" (OuterVolumeSpecName: "config-data") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.032445 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033798 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033875 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033926 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.033976 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034023 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034074 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jmms\" (UniqueName: 
\"kubernetes.io/projected/2ec52dbb-ca2f-4013-8536-972042607240-kube-api-access-5jmms\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034135 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mc46\" (UniqueName: \"kubernetes.io/projected/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-kube-api-access-9mc46\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.034195 4808 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.036816 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.038592 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.045476 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d78867d94-7lhqs"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.054470 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ec52dbb-ca2f-4013-8536-972042607240" (UID: "2ec52dbb-ca2f-4013-8536-972042607240"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.073300 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f6d995c5-hnz4n"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.090793 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.131400 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data" (OuterVolumeSpecName: "config-data") pod "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" (UID: "d0cc3be3-7aa7-4384-97ed-1ec7bf75f026"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144118 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144160 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144183 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data-custom\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " 
pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144203 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data-custom\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144250 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dxj\" (UniqueName: \"kubernetes.io/projected/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-kube-api-access-q2dxj\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144309 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990b124d-3558-48ad-87f8-503580da5cc7-logs\") pod 
\"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144340 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drb8b\" (UniqueName: \"kubernetes.io/projected/990b124d-3558-48ad-87f8-503580da5cc7-kube-api-access-drb8b\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-combined-ca-bundle\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144412 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144443 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-combined-ca-bundle\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-logs\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144559 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.144584 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ec52dbb-ca2f-4013-8536-972042607240-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.195874 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.205406 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.212448 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.216523 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.246415 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.246822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.246870 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data-custom\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 
crc kubenswrapper[4808]: I0217 16:15:15.247254 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data-custom\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247298 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247376 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2dxj\" (UniqueName: \"kubernetes.io/projected/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-kube-api-access-q2dxj\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247403 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 
16:15:15.247876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990b124d-3558-48ad-87f8-503580da5cc7-logs\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drb8b\" (UniqueName: \"kubernetes.io/projected/990b124d-3558-48ad-87f8-503580da5cc7-kube-api-access-drb8b\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.247967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248002 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-combined-ca-bundle\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 
17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248198 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/990b124d-3558-48ad-87f8-503580da5cc7-logs\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248221 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248276 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-combined-ca-bundle\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248307 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-logs\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: 
I0217 16:15:15.248757 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-logs\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248905 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248953 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.248991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.251027 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.251061 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data-custom\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.251707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data-custom\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.252153 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-config-data\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.252160 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/990b124d-3558-48ad-87f8-503580da5cc7-combined-ca-bundle\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.254532 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-combined-ca-bundle\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.255249 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-config-data\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.265629 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2dxj\" (UniqueName: \"kubernetes.io/projected/a0db6993-f3e7-4aa7-b5cc-1b848a15b56c-kube-api-access-q2dxj\") pod \"barbican-worker-55f6d995c5-hnz4n\" (UID: \"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c\") " pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.266119 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drb8b\" (UniqueName: \"kubernetes.io/projected/990b124d-3558-48ad-87f8-503580da5cc7-kube-api-access-drb8b\") pod \"barbican-keystone-listener-6d78867d94-7lhqs\" (UID: \"990b124d-3558-48ad-87f8-503580da5cc7\") " pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.266480 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"dnsmasq-dns-85ff748b95-29sc9\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.306729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-wdrmd" event={"ID":"2ec52dbb-ca2f-4013-8536-972042607240","Type":"ContainerDied","Data":"e334d06468b3a37f46d5f6db68268b3881996656b8f3df2be0b3c006d2589a72"} Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.306769 4808 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e334d06468b3a37f46d5f6db68268b3881996656b8f3df2be0b3c006d2589a72" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.306821 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-wdrmd" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.320462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-jcqjf" event={"ID":"d0cc3be3-7aa7-4384-97ed-1ec7bf75f026","Type":"ContainerDied","Data":"722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc"} Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.320498 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="722abc1b9b4878938b1d63e6058f446e8ab4a259fcfed886248ba3ca8f6e13fc" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.320695 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-jcqjf" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352520 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352675 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.352812 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.372000 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.373535 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.378843 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379493 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379683 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kqv9d" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379701 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.379838 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.388478 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.398622 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55f6d995c5-hnz4n" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.418462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.450351 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459595 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459642 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.459694 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " 
pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.465704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.467551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.482830 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.494836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.503197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"barbican-api-75bd7dcff4-tfcmj\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: 
I0217 16:15:15.526802 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.532320 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.542096 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.543134 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.543374 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqdgs" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.549919 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.555312 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.564018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573588 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573889 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573957 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.573987 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.585159 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.624361 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.653338 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.656628 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 
17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676359 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676424 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.676447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 
16:15:15.676470 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.682287 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.685940 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.690260 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.691520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.699988 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:15 crc 
kubenswrapper[4808]: I0217 16:15:15.706305 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cloudkitty-storageinit-cftjl\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.758495 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.760325 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.766808 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792192 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792221 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 
crc kubenswrapper[4808]: I0217 16:15:15.792250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792269 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792362 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792392 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792413 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792434 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792461 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.792522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.794967 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.805855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.807816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.809836 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.811256 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.837952 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"cinder-scheduler-0\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " pod="openstack/cinder-scheduler-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.859184 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894418 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894504 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894546 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894583 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894632 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894662 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.894681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896037 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896076 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896100 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896148 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896512 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.896866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.897085 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc 
kubenswrapper[4808]: I0217 16:15:15.897453 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.898328 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.924811 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"dnsmasq-dns-5c9776ccc5-2xw29\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.992852 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997450 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997465 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997715 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:15 crc kubenswrapper[4808]: I0217 16:15:15.997893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.002100 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.003083 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.010975 4808 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.013196 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.014275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.015947 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"cinder-api-0\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.045243 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.063287 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.143105 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.224631 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d78867d94-7lhqs"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.313336 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55f6d995c5-hnz4n"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.330942 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6576669595-nvtln"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.331251 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api" containerID="cri-o://811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d" gracePeriod=30 Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.331382 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" containerID="cri-o://fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84" gracePeriod=30 Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.353873 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c6489dbc7-2ddnw"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.355912 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.369731 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" event={"ID":"990b124d-3558-48ad-87f8-503580da5cc7","Type":"ContainerStarted","Data":"31cc8d75c1f4d242197ba91a2b42ad543f364921b9fb333fa6cbb71110597d2b"} Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.373208 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6489dbc7-2ddnw"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.387079 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9696/\": EOF" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.414902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-public-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.417829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-ovndb-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418033 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf8l5\" (UniqueName: \"kubernetes.io/projected/b7e54d61-1bf6-41ae-b885-7e6448d351a5-kube-api-access-sf8l5\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: 
\"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418570 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-httpd-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418828 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-combined-ca-bundle\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.418870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-internal-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.424967 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:16 crc kubenswrapper[4808]: W0217 16:15:16.428530 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6974c05c_8d53_4225_8ccd_c8c7c8956073.slice/crio-f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d WatchSource:0}: Error finding container f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d: Status 404 returned error can't find the container with id f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.439473 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:16 crc kubenswrapper[4808]: W0217 16:15:16.462752 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd86efad_8ad2_4e38_b731_5f892d34a582.slice/crio-5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d WatchSource:0}: Error finding container 5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d: Status 404 returned error can't find the container with id 5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.521389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-ovndb-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.521683 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf8l5\" (UniqueName: \"kubernetes.io/projected/b7e54d61-1bf6-41ae-b885-7e6448d351a5-kube-api-access-sf8l5\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.521803 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535001 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-httpd-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535046 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-combined-ca-bundle\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535099 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-internal-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.535159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-public-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.527314 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.526627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-ovndb-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.539178 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-public-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.540908 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-combined-ca-bundle\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.541640 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf8l5\" (UniqueName: \"kubernetes.io/projected/b7e54d61-1bf6-41ae-b885-7e6448d351a5-kube-api-access-sf8l5\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.542182 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-httpd-config\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: 
\"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.554602 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7e54d61-1bf6-41ae-b885-7e6448d351a5-internal-tls-certs\") pod \"neutron-6c6489dbc7-2ddnw\" (UID: \"b7e54d61-1bf6-41ae-b885-7e6448d351a5\") " pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.580788 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.581274 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.664534 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.699331 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.729672 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.794903 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:16 crc kubenswrapper[4808]: I0217 16:15:16.812075 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:15:16 crc kubenswrapper[4808]: W0217 16:15:16.831695 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf7344d6_b8f4_4234_bb75_f4d7702b040b.slice/crio-ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc WatchSource:0}: Error finding container ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc: Status 404 returned error can't find the container with id ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.033625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.191765 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:17 crc kubenswrapper[4808]: W0217 16:15:17.199082 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebaafdbf_7612_40c9_b044_697f41e930e2.slice/crio-e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369 WatchSource:0}: Error finding container e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369: Status 404 returned error can't find the container with id e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369 Feb 17 16:15:17 crc 
kubenswrapper[4808]: I0217 16:15:17.395113 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerStarted","Data":"0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.395396 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerStarted","Data":"ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.399798 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerStarted","Data":"5ac05208b68a6fcecfd3daeda1e831c1b6b22287e3316af8e4abbf40c7bb9c8b"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.409363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerStarted","Data":"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.409448 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerStarted","Data":"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.409489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerStarted","Data":"5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.410407 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.410441 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.422166 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-cftjl" podStartSLOduration=2.422146583 podStartE2EDuration="2.422146583s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:17.410894768 +0000 UTC m=+1280.927253851" watchObservedRunningTime="2026-02-17 16:15:17.422146583 +0000 UTC m=+1280.938505656" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.440057 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c6489dbc7-2ddnw"] Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.445797 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75bd7dcff4-tfcmj" podStartSLOduration=2.445777634 podStartE2EDuration="2.445777634s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:17.427543139 +0000 UTC m=+1280.943902212" watchObservedRunningTime="2026-02-17 16:15:17.445777634 +0000 UTC m=+1280.962136707" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.445850 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerStarted","Data":"fcedd92b0b29bbf31af03e2bbced87e666dc9a438c55215268bb770cfadf5c2a"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.451417 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" 
event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerStarted","Data":"e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.458386 4808 generic.go:334] "Generic (PLEG): container finished" podID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerID="d99cd647368dafaff2816f4fe6bc8fcc90f0c68e206ab7df9e289310b1ebed6f" exitCode=0 Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.458494 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" event={"ID":"6974c05c-8d53-4225-8ccd-c8c7c8956073","Type":"ContainerDied","Data":"d99cd647368dafaff2816f4fe6bc8fcc90f0c68e206ab7df9e289310b1ebed6f"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.458538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" event={"ID":"6974c05c-8d53-4225-8ccd-c8c7c8956073","Type":"ContainerStarted","Data":"f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.476178 4808 generic.go:334] "Generic (PLEG): container finished" podID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerID="fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84" exitCode=0 Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.476259 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerDied","Data":"fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.489464 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6d995c5-hnz4n" event={"ID":"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c","Type":"ContainerStarted","Data":"3ac1e3efa8e9d62a3f262d3c0293a5072cfc89a70e67782faeb9e36ee9c3e8e5"} Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.489506 4808 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.489616 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.513835 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.513887 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.652828 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.661851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:17 crc kubenswrapper[4808]: I0217 16:15:17.916926 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.105517 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.105595 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106171 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106328 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106437 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.106673 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") pod \"6974c05c-8d53-4225-8ccd-c8c7c8956073\" (UID: \"6974c05c-8d53-4225-8ccd-c8c7c8956073\") " Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.138417 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv" (OuterVolumeSpecName: "kube-api-access-clkmv") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "kube-api-access-clkmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.207787 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.210250 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clkmv\" (UniqueName: \"kubernetes.io/projected/6974c05c-8d53-4225-8ccd-c8c7c8956073-kube-api-access-clkmv\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.210298 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.241864 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config" (OuterVolumeSpecName: "config") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.252238 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.314819 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.315020 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6974c05c-8d53-4225-8ccd-c8c7c8956073" (UID: "6974c05c-8d53-4225-8ccd-c8c7c8956073"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328434 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328468 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328479 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.328488 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6974c05c-8d53-4225-8ccd-c8c7c8956073-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.558122 4808 generic.go:334] "Generic (PLEG): container finished" podID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd" exitCode=0 Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.558216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerDied","Data":"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.575503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" 
event={"ID":"6974c05c-8d53-4225-8ccd-c8c7c8956073","Type":"ContainerDied","Data":"f4d27695837be070b4363e7cb9ae125043b0ce87e34d2269a5ad68632157ac0d"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.575554 4808 scope.go:117] "RemoveContainer" containerID="d99cd647368dafaff2816f4fe6bc8fcc90f0c68e206ab7df9e289310b1ebed6f" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.575787 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-29sc9" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.589224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6489dbc7-2ddnw" event={"ID":"b7e54d61-1bf6-41ae-b885-7e6448d351a5","Type":"ContainerStarted","Data":"5fd374d9d6028f00e305deb8758c5c4143b1950a00f15dfa9e62eaede9d208ba"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.589268 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6489dbc7-2ddnw" event={"ID":"b7e54d61-1bf6-41ae-b885-7e6448d351a5","Type":"ContainerStarted","Data":"961cf37b2717b91cab861ae741064ca67b4f2bf52c3c18d7423efce877131d78"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.611955 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerStarted","Data":"35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da"} Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.614222 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.614823 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:18 crc kubenswrapper[4808]: I0217 16:15:18.877004 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:18 crc kubenswrapper[4808]: 
I0217 16:15:18.884986 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-29sc9"] Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.047072 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.166814 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" path="/var/lib/kubelet/pods/6974c05c-8d53-4225-8ccd-c8c7c8956073/volumes" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.630122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerStarted","Data":"3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550"} Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.637635 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerStarted","Data":"8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1"} Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642543 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c6489dbc7-2ddnw" event={"ID":"b7e54d61-1bf6-41ae-b885-7e6448d351a5","Type":"ContainerStarted","Data":"6f42cce323fc28581406cdc74f3517b723c8ab5654a6663336e6f738e93f94dd"} Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642664 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c6489dbc7-2ddnw" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642700 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.642724 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:19 crc kubenswrapper[4808]: I0217 16:15:19.663233 4808 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/neutron-6c6489dbc7-2ddnw" podStartSLOduration=3.663215821 podStartE2EDuration="3.663215821s" podCreationTimestamp="2026-02-17 16:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:19.659791188 +0000 UTC m=+1283.176150271" watchObservedRunningTime="2026-02-17 16:15:19.663215821 +0000 UTC m=+1283.179574884" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.640229 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.660477 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.660758 4808 generic.go:334] "Generic (PLEG): container finished" podID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerID="0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158" exitCode=0 Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.660878 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerDied","Data":"0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158"} Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.661315 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:20 crc kubenswrapper[4808]: I0217 16:15:20.661326 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.441235 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.592129 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.592182 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.671913 4808 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.673211 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log" containerID="cri-o://35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da" gracePeriod=30 Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.673324 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" containerID="cri-o://8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1" gracePeriod=30 Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.687800 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:15:21 crc kubenswrapper[4808]: I0217 16:15:21.704489 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.704466149 podStartE2EDuration="6.704466149s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:21.690696806 +0000 UTC m=+1285.207055879" watchObservedRunningTime="2026-02-17 16:15:21.704466149 +0000 UTC m=+1285.220825222" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.039707 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6576669595-nvtln" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9696/\": dial tcp 10.217.0.169:9696: connect: connection refused" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.272659 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f445fb886-lsqq4"] Feb 17 16:15:22 crc kubenswrapper[4808]: E0217 16:15:22.273137 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerName="init" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.273167 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerName="init" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.273494 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6974c05c-8d53-4225-8ccd-c8c7c8956073" containerName="init" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.274836 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.276530 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.276740 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.290841 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f445fb886-lsqq4"] Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439732 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439816 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-combined-ca-bundle\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439845 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9glp\" (UniqueName: \"kubernetes.io/projected/a9bf13d7-3430-4818-b8fc-239796570b6c-kube-api-access-b9glp\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439872 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-internal-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.439927 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bf13d7-3430-4818-b8fc-239796570b6c-logs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.440008 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data-custom\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.440038 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-public-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.541882 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.541946 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-combined-ca-bundle\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.541980 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9glp\" (UniqueName: \"kubernetes.io/projected/a9bf13d7-3430-4818-b8fc-239796570b6c-kube-api-access-b9glp\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542014 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-internal-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542051 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bf13d7-3430-4818-b8fc-239796570b6c-logs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data-custom\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.542151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-public-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.544362 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9bf13d7-3430-4818-b8fc-239796570b6c-logs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.551893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data-custom\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.553252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-public-tls-certs\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.553662 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-combined-ca-bundle\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.553919 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-internal-tls-certs\") pod 
\"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.557331 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9bf13d7-3430-4818-b8fc-239796570b6c-config-data\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.569125 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9glp\" (UniqueName: \"kubernetes.io/projected/a9bf13d7-3430-4818-b8fc-239796570b6c-kube-api-access-b9glp\") pod \"barbican-api-5f445fb886-lsqq4\" (UID: \"a9bf13d7-3430-4818-b8fc-239796570b6c\") " pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.607563 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685006 4808 generic.go:334] "Generic (PLEG): container finished" podID="9f172158-bc5a-40a6-afc6-df84970d436d" containerID="8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1" exitCode=0 Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685042 4808 generic.go:334] "Generic (PLEG): container finished" podID="9f172158-bc5a-40a6-afc6-df84970d436d" containerID="35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da" exitCode=143 Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685115 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerDied","Data":"8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1"} Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.685167 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerDied","Data":"35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da"} Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.687864 4808 generic.go:334] "Generic (PLEG): container finished" podID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerID="811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d" exitCode=0 Feb 17 16:15:22 crc kubenswrapper[4808]: I0217 16:15:22.688678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerDied","Data":"811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d"} Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.144352 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.146040 4808 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/cinder-api-0" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.182:8776/healthcheck\": dial tcp 10.217.0.182:8776: connect: connection refused" Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.929778 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:26 crc kubenswrapper[4808]: I0217 16:15:26.968822 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.242628 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.266157 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6576669595-nvtln" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347619 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347664 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") pod 
\"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347706 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347732 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347826 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347919 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347967 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.347982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.348044 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") pod \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\" (UID: \"cf7344d6-b8f4-4234-bb75-f4d7702b040b\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.348088 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") pod \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\" (UID: \"dd20b2ca-153a-4f21-9c41-4f00bdc82b56\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.367497 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts" (OuterVolumeSpecName: "scripts") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.374584 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz" (OuterVolumeSpecName: "kube-api-access-kfzgz") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "kube-api-access-kfzgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.375499 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p" (OuterVolumeSpecName: "kube-api-access-84l8p") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "kube-api-access-84l8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.386800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.404305 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data" (OuterVolumeSpecName: "config-data") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.413370 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs" (OuterVolumeSpecName: "certs") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.448333 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf7344d6-b8f4-4234-bb75-f4d7702b040b" (UID: "cf7344d6-b8f4-4234-bb75-f4d7702b040b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.452762 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84l8p\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-kube-api-access-84l8p\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.452887 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.453168 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.465984 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzgz\" (UniqueName: \"kubernetes.io/projected/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-kube-api-access-kfzgz\") on node \"crc\" DevicePath 
\"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.466154 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cf7344d6-b8f4-4234-bb75-f4d7702b040b-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.466226 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf7344d6-b8f4-4234-bb75-f4d7702b040b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.466302 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.505859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.539719 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config" (OuterVolumeSpecName: "config") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.545865 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.568654 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.568685 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.613883 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.636312 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.648041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dd20b2ca-153a-4f21-9c41-4f00bdc82b56" (UID: "dd20b2ca-153a-4f21-9c41-4f00bdc82b56"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669541 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669677 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669824 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669850 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669882 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669928 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.669963 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") pod \"9f172158-bc5a-40a6-afc6-df84970d436d\" (UID: \"9f172158-bc5a-40a6-afc6-df84970d436d\") " Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670431 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670445 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670454 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd20b2ca-153a-4f21-9c41-4f00bdc82b56-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.670748 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.674246 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs" (OuterVolumeSpecName: "logs") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.678962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts" (OuterVolumeSpecName: "scripts") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.679377 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg" (OuterVolumeSpecName: "kube-api-access-l8ndg") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "kube-api-access-l8ndg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.685751 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.730678 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.746411 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data" (OuterVolumeSpecName: "config-data") pod "9f172158-bc5a-40a6-afc6-df84970d436d" (UID: "9f172158-bc5a-40a6-afc6-df84970d436d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771664 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771687 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771697 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9f172158-bc5a-40a6-afc6-df84970d436d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771705 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771713 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8ndg\" (UniqueName: \"kubernetes.io/projected/9f172158-bc5a-40a6-afc6-df84970d436d-kube-api-access-l8ndg\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771724 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f172158-bc5a-40a6-afc6-df84970d436d-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.771732 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f172158-bc5a-40a6-afc6-df84970d436d-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.772185 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerStarted","Data":"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.773903 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.776038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6d995c5-hnz4n" event={"ID":"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c","Type":"ContainerStarted","Data":"3f73cc1f4bde00bd908b4cd2358df0443bc927e5f04373e71d79090a1a91ee61"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.779067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6576669595-nvtln" event={"ID":"dd20b2ca-153a-4f21-9c41-4f00bdc82b56","Type":"ContainerDied","Data":"6a095cda0c57e7c83e37162d0a00993ab0fc7d2ed318b1cd5b24f7f8e6f8ed0d"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.779100 4808 scope.go:117] "RemoveContainer" containerID="fee07854741e5a088b7b1dea17a21007719827fd0ce55cfd2c9c99ff36340d84"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.779205 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6576669595-nvtln"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.789977 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-cftjl" event={"ID":"cf7344d6-b8f4-4234-bb75-f4d7702b040b","Type":"ContainerDied","Data":"ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.790072 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad12513f4962dbcb71cd89e1403abeaaad21ab0da490387e800ae06c89c226bc"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.790140 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-cftjl"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.802436 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" podStartSLOduration=12.802419914 podStartE2EDuration="12.802419914s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:27.797669685 +0000 UTC m=+1291.314028768" watchObservedRunningTime="2026-02-17 16:15:27.802419914 +0000 UTC m=+1291.318778987"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.804944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerStarted","Data":"880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805102 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent" containerID="cri-o://dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a" gracePeriod=30
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805160 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805203 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd" containerID="cri-o://880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64" gracePeriod=30
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805225 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent" containerID="cri-o://dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe" gracePeriod=30
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.805307 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core" containerID="cri-o://5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967" gracePeriod=30
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.817039 4808 scope.go:117] "RemoveContainer" containerID="811f9cc94c4ee217b19fe631254bddba36393da079ca418fd65bacd8378b729d"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.839458 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.842031 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9f172158-bc5a-40a6-afc6-df84970d436d","Type":"ContainerDied","Data":"fcedd92b0b29bbf31af03e2bbced87e666dc9a438c55215268bb770cfadf5c2a"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.845310 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" event={"ID":"990b124d-3558-48ad-87f8-503580da5cc7","Type":"ContainerStarted","Data":"811ef05894ae13a541e79c744cd318f4beab6bb8a4ad62bce48c9bb1f1fb9b22"}
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.909183 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.335394349 podStartE2EDuration="1m8.909161564s" podCreationTimestamp="2026-02-17 16:14:19 +0000 UTC" firstStartedPulling="2026-02-17 16:14:20.796322593 +0000 UTC m=+1224.312681666" lastFinishedPulling="2026-02-17 16:15:27.370089808 +0000 UTC m=+1290.886448881" observedRunningTime="2026-02-17 16:15:27.855756518 +0000 UTC m=+1291.372115611" watchObservedRunningTime="2026-02-17 16:15:27.909161564 +0000 UTC m=+1291.425520637"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.945351 4808 scope.go:117] "RemoveContainer" containerID="8aa1d22280596defb819b3119564e868d6ec09231fa1d0d6b3bcc085ed8b0dd1"
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.969376 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f445fb886-lsqq4"]
Feb 17 16:15:27 crc kubenswrapper[4808]: I0217 16:15:27.989868 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6576669595-nvtln"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.016711 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6576669595-nvtln"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.027127 4808 scope.go:117] "RemoveContainer" containerID="35656b2866277a003526143f6d404a3a9c98f5de68552024746c712c1205e4da"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.028227 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.038469 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.049709 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050142 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050157 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd"
Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050174 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050181 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log"
Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050207 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050213 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api"
Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050227 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050235 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api"
Feb 17 16:15:28 crc kubenswrapper[4808]: E0217 16:15:28.050253 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerName="cloudkitty-storageinit"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050261 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerName="cloudkitty-storageinit"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050486 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-httpd"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050500 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api-log"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050509 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" containerName="neutron-api"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050518 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" containerName="cloudkitty-storageinit"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.050531 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" containerName="cinder-api"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.051806 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.057931 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.057953 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.058642 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.068373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214550 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b221adbf-8d08-4f9c-8bb2-578555a453df-logs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214745 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214827 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.214920 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b221adbf-8d08-4f9c-8bb2-578555a453df-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215077 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2skn\" (UniqueName: \"kubernetes.io/projected/b221adbf-8d08-4f9c-8bb2-578555a453df-kube-api-access-s2skn\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215325 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data-custom\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215414 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215517 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-scripts\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.215627 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319390 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data-custom\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319659 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-scripts\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319772 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b221adbf-8d08-4f9c-8bb2-578555a453df-logs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319844 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319858 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b221adbf-8d08-4f9c-8bb2-578555a453df-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.319898 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2skn\" (UniqueName: \"kubernetes.io/projected/b221adbf-8d08-4f9c-8bb2-578555a453df-kube-api-access-s2skn\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.321697 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b221adbf-8d08-4f9c-8bb2-578555a453df-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.329657 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b221adbf-8d08-4f9c-8bb2-578555a453df-logs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.330049 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.346169 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.348077 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-scripts\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.350499 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.351283 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.351899 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b221adbf-8d08-4f9c-8bb2-578555a453df-config-data-custom\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.352174 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2skn\" (UniqueName: \"kubernetes.io/projected/b221adbf-8d08-4f9c-8bb2-578555a453df-kube-api-access-s2skn\") pod \"cinder-api-0\" (UID: \"b221adbf-8d08-4f9c-8bb2-578555a453df\") " pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.409523 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.453370 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.454590 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.461881 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.461902 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-kqv9d"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.461989 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.462082 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.462295 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.488670 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.571018 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.613482 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.615196 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629383 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629410 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629452 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.629480 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.649885 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733685 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733730 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733764 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733782 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733803 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733848 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733914 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.733971 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.734018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.734032 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.745493 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.746400 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.756396 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.765291 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.766990 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.771825 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.773661 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.779689 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.790234 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"cloudkitty-proc-0\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.805626 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.817960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836746 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.836949 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.837015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.837035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.838079 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.839043 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.841498 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: 
\"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.841942 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.848274 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.869532 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"dnsmasq-dns-67bdc55879-786qn\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") " pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876747 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64" exitCode=0 Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876776 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967" exitCode=2 Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876785 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" 
containerID="dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a" exitCode=0 Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876854 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.876864 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.882747 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f445fb886-lsqq4" event={"ID":"a9bf13d7-3430-4818-b8fc-239796570b6c","Type":"ContainerStarted","Data":"015c6612d90bd5fc05796bc7fed418ea69aea7bd10e869ca6f1496576d4a26e0"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.882774 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f445fb886-lsqq4" event={"ID":"a9bf13d7-3430-4818-b8fc-239796570b6c","Type":"ContainerStarted","Data":"db010c2307a19729c4620396f288f51bb34619fae666877d27a78254eb216149"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.883054 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.883106 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:28 crc 
kubenswrapper[4808]: I0217 16:15:28.916780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" event={"ID":"990b124d-3558-48ad-87f8-503580da5cc7","Type":"ContainerStarted","Data":"12cf51cbaaaaa0035e7d43146a9493c075855fc56dd958e42443ac7da0c4910a"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.933089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55f6d995c5-hnz4n" event={"ID":"a0db6993-f3e7-4aa7-b5cc-1b848a15b56c","Type":"ContainerStarted","Data":"5d3a8263da4ef5c89e34853733b82044dda3120c3c78cadd666ba2951bb4612c"} Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.964393 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.972105 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.998303 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:28 crc kubenswrapper[4808]: I0217 16:15:28.999145 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.000877 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.001026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.001123 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.001280 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:28.990757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerStarted","Data":"0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e"} Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.015994 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f445fb886-lsqq4" podStartSLOduration=7.015970201 
podStartE2EDuration="7.015970201s" podCreationTimestamp="2026-02-17 16:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:28.917095545 +0000 UTC m=+1292.433454618" watchObservedRunningTime="2026-02-17 16:15:29.015970201 +0000 UTC m=+1292.532329274" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.066009 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d78867d94-7lhqs" podStartSLOduration=4.178243585 podStartE2EDuration="15.065981976s" podCreationTimestamp="2026-02-17 16:15:14 +0000 UTC" firstStartedPulling="2026-02-17 16:15:16.242707329 +0000 UTC m=+1279.759066402" lastFinishedPulling="2026-02-17 16:15:27.13044572 +0000 UTC m=+1290.646804793" observedRunningTime="2026-02-17 16:15:29.004112241 +0000 UTC m=+1292.520471324" watchObservedRunningTime="2026-02-17 16:15:29.065981976 +0000 UTC m=+1292.582341069" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.093701 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=12.917956252 podStartE2EDuration="14.093674705s" podCreationTimestamp="2026-02-17 16:15:15 +0000 UTC" firstStartedPulling="2026-02-17 16:15:16.833995759 +0000 UTC m=+1280.350354832" lastFinishedPulling="2026-02-17 16:15:18.009714212 +0000 UTC m=+1281.526073285" observedRunningTime="2026-02-17 16:15:29.043351443 +0000 UTC m=+1292.559710516" watchObservedRunningTime="2026-02-17 16:15:29.093674705 +0000 UTC m=+1292.610033788" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.101250 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55f6d995c5-hnz4n" podStartSLOduration=4.15260695 podStartE2EDuration="15.101227829s" podCreationTimestamp="2026-02-17 16:15:14 +0000 UTC" firstStartedPulling="2026-02-17 16:15:16.381259141 +0000 UTC 
m=+1279.897618214" lastFinishedPulling="2026-02-17 16:15:27.32988002 +0000 UTC m=+1290.846239093" observedRunningTime="2026-02-17 16:15:29.058436781 +0000 UTC m=+1292.574795874" watchObservedRunningTime="2026-02-17 16:15:29.101227829 +0000 UTC m=+1292.617586912" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110171 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110231 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110362 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"cloudkitty-api-0\" (UID: 
\"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.110530 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.117185 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.119193 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.133094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.138090 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.142094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.145122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.160184 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.177149 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f172158-bc5a-40a6-afc6-df84970d436d" path="/var/lib/kubelet/pods/9f172158-bc5a-40a6-afc6-df84970d436d/volumes" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.178045 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd20b2ca-153a-4f21-9c41-4f00bdc82b56" path="/var/lib/kubelet/pods/dd20b2ca-153a-4f21-9c41-4f00bdc82b56/volumes" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.247481 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.430392 4808 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.566653 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:29 crc kubenswrapper[4808]: I0217 16:15:29.842080 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"] Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.016958 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerStarted","Data":"d486a3a307b0de09a60edde55636666b3342a5903cc110cae3e17e9502f50af9"} Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.021271 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerStarted","Data":"d83fa5a20f760435e6a158fc895b5bd4256f47d348c4b60bfa4934c4b8383f1a"} Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.024899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b221adbf-8d08-4f9c-8bb2-578555a453df","Type":"ContainerStarted","Data":"d8c64ebcef65f5baba79f233ba06426dadfbe0680217c995d73865efa0d666fb"} Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.027052 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f445fb886-lsqq4" event={"ID":"a9bf13d7-3430-4818-b8fc-239796570b6c","Type":"ContainerStarted","Data":"ca295969bb0e5c39df0b90c6d6227d025c5b6e39a664f6e1537222ae6832dd6c"} Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.027819 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns" containerID="cri-o://593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" gracePeriod=10 
Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.067856 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.598725 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666296 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666699 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666756 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666831 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666850 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.666883 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") pod \"ebaafdbf-7612-40c9-b044-697f41e930e2\" (UID: \"ebaafdbf-7612-40c9-b044-697f41e930e2\") " Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.672797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r" (OuterVolumeSpecName: "kube-api-access-n7z6r") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "kube-api-access-n7z6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.750675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.765094 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.769966 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.769997 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7z6r\" (UniqueName: \"kubernetes.io/projected/ebaafdbf-7612-40c9-b044-697f41e930e2-kube-api-access-n7z6r\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.770011 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.772043 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.780717 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.791665 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config" (OuterVolumeSpecName: "config") pod "ebaafdbf-7612-40c9-b044-697f41e930e2" (UID: "ebaafdbf-7612-40c9-b044-697f41e930e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.872017 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.872041 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:30 crc kubenswrapper[4808]: I0217 16:15:30.872052 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebaafdbf-7612-40c9-b044-697f41e930e2-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.011749 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.043072 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef386302-14e1-4b00-b816-e85da8d23114" containerID="76cc030230faf69f3923cb1665482598e8d9c392060ca1c1353369b5c8628b5a" exitCode=0 Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.043123 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" 
event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerDied","Data":"76cc030230faf69f3923cb1665482598e8d9c392060ca1c1353369b5c8628b5a"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046019 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerStarted","Data":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerStarted","Data":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerStarted","Data":"643be3a025f081600c92f8d5d11a7801aaad867291685319f6312aa567fb9d6a"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.046651 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.048711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b221adbf-8d08-4f9c-8bb2-578555a453df","Type":"ContainerStarted","Data":"aa8228c5daf85af14f81736842275b7f307863cb24e1467c7a4c23f8458865ca"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.048734 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b221adbf-8d08-4f9c-8bb2-578555a453df","Type":"ContainerStarted","Data":"a26d0c09826de2ec55266756d360518d92b4685278c12c05abb29f8474277c36"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.049239 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 17 16:15:31 
crc kubenswrapper[4808]: I0217 16:15:31.053310 4808 generic.go:334] "Generic (PLEG): container finished" podID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" exitCode=0 Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.053939 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.057812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerDied","Data":"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.058291 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2xw29" event={"ID":"ebaafdbf-7612-40c9-b044-697f41e930e2","Type":"ContainerDied","Data":"e99cc9a0fa3bce5cde0547a70bbca7ff59974ec820617eba60536a7f6b74d369"} Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.058354 4808 scope.go:117] "RemoveContainer" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.112890 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.112864165 podStartE2EDuration="3.112864165s" podCreationTimestamp="2026-02-17 16:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:31.086652646 +0000 UTC m=+1294.603011719" watchObservedRunningTime="2026-02-17 16:15:31.112864165 +0000 UTC m=+1294.629223248" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.114807 4808 scope.go:117] "RemoveContainer" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd" Feb 17 16:15:31 crc 
kubenswrapper[4808]: I0217 16:15:31.140972 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.140949395 podStartE2EDuration="4.140949395s" podCreationTimestamp="2026-02-17 16:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:31.115018224 +0000 UTC m=+1294.631377297" watchObservedRunningTime="2026-02-17 16:15:31.140949395 +0000 UTC m=+1294.657308468" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.172523 4808 scope.go:117] "RemoveContainer" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" Feb 17 16:15:31 crc kubenswrapper[4808]: E0217 16:15:31.174485 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953\": container with ID starting with 593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953 not found: ID does not exist" containerID="593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.174540 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953"} err="failed to get container status \"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953\": rpc error: code = NotFound desc = could not find container \"593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953\": container with ID starting with 593b85e7ed11967846ba3f0a308af29ad73243d26b49fd486a4676c69dbd2953 not found: ID does not exist" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.174590 4808 scope.go:117] "RemoveContainer" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd" Feb 17 16:15:31 crc kubenswrapper[4808]: E0217 
16:15:31.180430 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd\": container with ID starting with d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd not found: ID does not exist" containerID="d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.180477 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd"} err="failed to get container status \"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd\": rpc error: code = NotFound desc = could not find container \"d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd\": container with ID starting with d7d5b1aacc9ee39478911942c54b18b463b829b4e46aa33564c91e96616177dd not found: ID does not exist" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.227483 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.270339 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2xw29"] Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.369618 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 17 16:15:31 crc kubenswrapper[4808]: I0217 16:15:31.622734 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.064908 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerStarted","Data":"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"} Feb 17 16:15:32 crc 
kubenswrapper[4808]: I0217 16:15:32.066833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerStarted","Data":"893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53"} Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.067638 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.092511 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.509988524 podStartE2EDuration="4.092490281s" podCreationTimestamp="2026-02-17 16:15:28 +0000 UTC" firstStartedPulling="2026-02-17 16:15:29.589977453 +0000 UTC m=+1293.106336516" lastFinishedPulling="2026-02-17 16:15:31.1724792 +0000 UTC m=+1294.688838273" observedRunningTime="2026-02-17 16:15:32.085122812 +0000 UTC m=+1295.601481895" watchObservedRunningTime="2026-02-17 16:15:32.092490281 +0000 UTC m=+1295.608849354" Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.114657 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.120049 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podStartSLOduration=4.120033386 podStartE2EDuration="4.120033386s" podCreationTimestamp="2026-02-17 16:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:32.118111484 +0000 UTC m=+1295.634470567" watchObservedRunningTime="2026-02-17 16:15:32.120033386 +0000 UTC m=+1295.636392459" Feb 17 16:15:32 crc kubenswrapper[4808]: I0217 16:15:32.185282 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 17 16:15:33 crc 
kubenswrapper[4808]: I0217 16:15:33.101658 4808 generic.go:334] "Generic (PLEG): container finished" podID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerID="dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe" exitCode=0 Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.101713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe"} Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.102261 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler" containerID="cri-o://3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550" gracePeriod=30 Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.102430 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe" containerID="cri-o://0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e" gracePeriod=30 Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.102729 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log" containerID="cri-o://ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" gracePeriod=30 Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.103283 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api" containerID="cri-o://0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" gracePeriod=30 Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.168238 
4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" path="/var/lib/kubelet/pods/ebaafdbf-7612-40c9-b044-697f41e930e2/volumes" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.636424 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797273 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797599 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797708 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797744 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797886 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797926 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.797996 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") pod \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\" (UID: \"ce9fba55-1b70-4d39-a052-bff96bd8e93a\") " Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.799253 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.799407 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.808825 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz" (OuterVolumeSpecName: "kube-api-access-j5gdz") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "kube-api-access-j5gdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.855026 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts" (OuterVolumeSpecName: "scripts") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901137 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901185 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce9fba55-1b70-4d39-a052-bff96bd8e93a-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901196 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.901205 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5gdz\" (UniqueName: \"kubernetes.io/projected/ce9fba55-1b70-4d39-a052-bff96bd8e93a-kube-api-access-j5gdz\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:33 
crc kubenswrapper[4808]: I0217 16:15:33.912303 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.930706 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:33 crc kubenswrapper[4808]: I0217 16:15:33.967157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data" (OuterVolumeSpecName: "config-data") pod "ce9fba55-1b70-4d39-a052-bff96bd8e93a" (UID: "ce9fba55-1b70-4d39-a052-bff96bd8e93a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.003024 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.003066 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.003080 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce9fba55-1b70-4d39-a052-bff96bd8e93a-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.054250 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.114397 4808 generic.go:334] "Generic (PLEG): container finished" podID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerID="0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e" exitCode=0 Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.114466 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerDied","Data":"0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e"} Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.117641 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.117631 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce9fba55-1b70-4d39-a052-bff96bd8e93a","Type":"ContainerDied","Data":"722643afae2a4e200c6ad3b18d935dcb7ed1baa99b37d21d611a112237864c00"} Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.117902 4808 scope.go:117] "RemoveContainer" containerID="880dacad4a3e154e4d52b5e6d057696d1bf66aa3b76e3929039347494764eb64" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119833 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0a53ca-554f-4be2-a185-3eba97454429" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" exitCode=0 Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119853 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119880 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerDied","Data":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"} Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerDied","Data":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"} Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.119860 4808 generic.go:334] "Generic (PLEG): container finished" podID="bb0a53ca-554f-4be2-a185-3eba97454429" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" exitCode=143 Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.120016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"bb0a53ca-554f-4be2-a185-3eba97454429","Type":"ContainerDied","Data":"643be3a025f081600c92f8d5d11a7801aaad867291685319f6312aa567fb9d6a"} Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.120117 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc" containerID="cri-o://50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" gracePeriod=30 Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.141372 4808 scope.go:117] "RemoveContainer" containerID="5ae1963ac1b0852c4683f5358c8722c23e5499fa516e84308b0247d589ec8967" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.171867 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.182798 4808 scope.go:117] "RemoveContainer" containerID="dd8761ee926d8071fc41da21713fb32d5f439b5455e53db35d9392155b78adbe" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213482 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213770 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: 
\"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213816 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213859 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213889 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.213983 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") pod \"bb0a53ca-554f-4be2-a185-3eba97454429\" (UID: \"bb0a53ca-554f-4be2-a185-3eba97454429\") " Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.217624 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs" (OuterVolumeSpecName: "logs") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.218954 4808 scope.go:117] "RemoveContainer" containerID="dab1c654217acba93cbe85ef948ea50d4d0076687aeb53ea5db8956f9dc60a1a" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.222344 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts" (OuterVolumeSpecName: "scripts") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.222856 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64" (OuterVolumeSpecName: "kube-api-access-gbp64") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "kube-api-access-gbp64". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.223343 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs" (OuterVolumeSpecName: "certs") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.224810 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.242441 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243457 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243473 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243489 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="init" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243495 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="init" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243512 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243518 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243528 
4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243533 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243548 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243554 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243583 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243590 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.243603 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.243609 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244283 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="sg-core" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244301 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-notification-agent" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244312 
4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244320 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="proxy-httpd" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244330 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" containerName="cloudkitty-api-log" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244342 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebaafdbf-7612-40c9-b044-697f41e930e2" containerName="dnsmasq-dns" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.244354 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" containerName="ceilometer-central-agent" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.246534 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.246925 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.248403 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.251814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.252422 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.257689 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.267336 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data" (OuterVolumeSpecName: "config-data") pod "bb0a53ca-554f-4be2-a185-3eba97454429" (UID: "bb0a53ca-554f-4be2-a185-3eba97454429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.315523 4808 scope.go:117] "RemoveContainer" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317494 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbp64\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-kube-api-access-gbp64\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317531 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317547 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a53ca-554f-4be2-a185-3eba97454429-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317559 4808 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317587 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bb0a53ca-554f-4be2-a185-3eba97454429-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317601 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.317613 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0a53ca-554f-4be2-a185-3eba97454429-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.344202 4808 scope.go:117] "RemoveContainer" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.359667 4808 scope.go:117] "RemoveContainer" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.360174 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": container with ID starting with 0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad not found: ID does not exist" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360226 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"} err="failed to get container status \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": rpc error: code = NotFound desc = could not find container \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": container with ID starting with 0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad not found: ID does not exist" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360257 4808 scope.go:117] "RemoveContainer" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" Feb 17 16:15:34 crc kubenswrapper[4808]: E0217 16:15:34.360682 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": container with ID starting with ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2 not found: ID does not exist" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360742 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"} err="failed to get container status \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": rpc error: code = NotFound desc = could not find container \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": container with ID starting with ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2 not found: ID does not exist" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.360782 4808 scope.go:117] "RemoveContainer" containerID="0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.361089 4808 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad"} err="failed to get container status \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": rpc error: code = NotFound desc = could not find container \"0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad\": container with ID starting with 0778140cec010c1252604b91cd534db0da28521dd85bdc49c1940e48ff51c5ad not found: ID does not exist" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.361120 4808 scope.go:117] "RemoveContainer" containerID="ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.361476 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2"} err="failed to get container status \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": rpc error: code = NotFound desc = could not find container \"ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2\": container with ID starting with ff8c13248ed3bc6b83102bff59c9c6021e22f8698b1b6f41e54decc4c38650d2 not found: ID does not exist" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448419 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc 
kubenswrapper[4808]: I0217 16:15:34.448545 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448630 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448674 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448817 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.448838 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.484219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.495746 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.507963 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.509756 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.514213 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.514443 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.514557 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.522089 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550264 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550387 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550409 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550487 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.550523 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.552218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " 
pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.552505 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.555524 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.556086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.556101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.557143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.566652 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcrg4\" (UniqueName: 
\"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ceilometer-0\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.606956 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ktxk\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-kube-api-access-4ktxk\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652722 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652757 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652798 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b35dce7b-8ffe-4981-8376-5db5a01dcf77-logs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652825 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652910 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-scripts\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.652932 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ktxk\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-kube-api-access-4ktxk\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754163 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754188 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754229 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b35dce7b-8ffe-4981-8376-5db5a01dcf77-logs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754257 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 
16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754311 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754345 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-scripts\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.754366 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.755201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b35dce7b-8ffe-4981-8376-5db5a01dcf77-logs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.759055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.759533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.759791 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-config-data\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.760468 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-scripts\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.760682 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.762549 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.763338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b35dce7b-8ffe-4981-8376-5db5a01dcf77-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 
16:15:34.773798 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ktxk\" (UniqueName: \"kubernetes.io/projected/b35dce7b-8ffe-4981-8376-5db5a01dcf77-kube-api-access-4ktxk\") pod \"cloudkitty-api-0\" (UID: \"b35dce7b-8ffe-4981-8376-5db5a01dcf77\") " pod="openstack/cloudkitty-api-0" Feb 17 16:15:34 crc kubenswrapper[4808]: I0217 16:15:34.834487 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 17 16:15:35 crc kubenswrapper[4808]: W0217 16:15:35.067933 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podade95199_c613_4920_aa24_6cedde28dda6.slice/crio-356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf WatchSource:0}: Error finding container 356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf: Status 404 returned error can't find the container with id 356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.068809 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.128768 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf"} Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.155333 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0a53ca-554f-4be2-a185-3eba97454429" path="/var/lib/kubelet/pods/bb0a53ca-554f-4be2-a185-3eba97454429/volumes" Feb 17 16:15:35 crc kubenswrapper[4808]: I0217 16:15:35.156136 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9fba55-1b70-4d39-a052-bff96bd8e93a" path="/var/lib/kubelet/pods/ce9fba55-1b70-4d39-a052-bff96bd8e93a/volumes" Feb 17 16:15:35 
crc kubenswrapper[4808]: I0217 16:15:35.299141 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 17 16:15:35 crc kubenswrapper[4808]: W0217 16:15:35.301964 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb35dce7b_8ffe_4981_8376_5db5a01dcf77.slice/crio-5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd WatchSource:0}: Error finding container 5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd: Status 404 returned error can't find the container with id 5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.148999 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.152527 4808 generic.go:334] "Generic (PLEG): container finished" podID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerID="3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550" exitCode=0 Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.152743 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerDied","Data":"3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b35dce7b-8ffe-4981-8376-5db5a01dcf77","Type":"ContainerStarted","Data":"435e7e168730fdbe635d838267298718859477108e0d4b40fcac3b5ef64e0fd4"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" 
event={"ID":"b35dce7b-8ffe-4981-8376-5db5a01dcf77","Type":"ContainerStarted","Data":"35d8865441ee3117fccc57fcafb8ffc8b54527867783545174534182b937dbb1"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155255 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"b35dce7b-8ffe-4981-8376-5db5a01dcf77","Type":"ContainerStarted","Data":"5aeee06cb2f420158a429d9e611bf17f623eb19a5c52d34b3b5288c68b008efd"} Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.155376 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.225211 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.225190303 podStartE2EDuration="2.225190303s" podCreationTimestamp="2026-02-17 16:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:36.211924825 +0000 UTC m=+1299.728283928" watchObservedRunningTime="2026-02-17 16:15:36.225190303 +0000 UTC m=+1299.741549396" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.416536 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.504675 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506120 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506166 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506199 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506229 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") " Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") pod \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\" (UID: \"37da8fa5-9dda-4e98-9a63-a4c0036e0017\") "
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.506784 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.508101 4808 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/37da8fa5-9dda-4e98-9a63-a4c0036e0017-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.515510 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.518500 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g" (OuterVolumeSpecName: "kube-api-access-lxm9g") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "kube-api-access-lxm9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.519504 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts" (OuterVolumeSpecName: "scripts") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.597428 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611861 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611893 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611902 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.611911 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxm9g\" (UniqueName: \"kubernetes.io/projected/37da8fa5-9dda-4e98-9a63-a4c0036e0017-kube-api-access-lxm9g\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.647484 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data" (OuterVolumeSpecName: "config-data") pod "37da8fa5-9dda-4e98-9a63-a4c0036e0017" (UID: "37da8fa5-9dda-4e98-9a63-a4c0036e0017"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:36 crc kubenswrapper[4808]: I0217 16:15:36.713146 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37da8fa5-9dda-4e98-9a63-a4c0036e0017-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.178305 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68"}
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.190537 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"37da8fa5-9dda-4e98-9a63-a4c0036e0017","Type":"ContainerDied","Data":"5ac05208b68a6fcecfd3daeda1e831c1b6b22287e3316af8e4abbf40c7bb9c8b"}
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.190617 4808 scope.go:117] "RemoveContainer" containerID="0299101d44d10b5033809e45bef98b67a9f7bed24aac135e1eb10a2b4058b95e"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.190841 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.234024 4808 scope.go:117] "RemoveContainer" containerID="3e8a06d14230c2f33211006c669f2e9d81553a63563d9c660acf7efbe1266550"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.249636 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.265187 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.289827 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:15:37 crc kubenswrapper[4808]: E0217 16:15:37.290938 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.290963 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler"
Feb 17 16:15:37 crc kubenswrapper[4808]: E0217 16:15:37.290981 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.290990 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.291219 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="probe"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.291299 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" containerName="cinder-scheduler"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.292905 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.300951 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.320248 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.439649 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.439998 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdttm\" (UniqueName: \"kubernetes.io/projected/fce98890-1299-4c07-8a3a-739241f0bf0d-kube-api-access-kdttm\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440099 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440219 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440384 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.440489 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fce98890-1299-4c07-8a3a-739241f0bf0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542332 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdttm\" (UniqueName: \"kubernetes.io/projected/fce98890-1299-4c07-8a3a-739241f0bf0d-kube-api-access-kdttm\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542432 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.542557 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.543102 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.543210 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fce98890-1299-4c07-8a3a-739241f0bf0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.543444 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fce98890-1299-4c07-8a3a-739241f0bf0d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.546996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.547030 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.547687 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.548263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fce98890-1299-4c07-8a3a-739241f0bf0d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.561510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdttm\" (UniqueName: \"kubernetes.io/projected/fce98890-1299-4c07-8a3a-739241f0bf0d-kube-api-access-kdttm\") pod \"cinder-scheduler-0\" (UID: \"fce98890-1299-4c07-8a3a-739241f0bf0d\") " pod="openstack/cinder-scheduler-0"
Feb 17 16:15:37 crc kubenswrapper[4808]: I0217 16:15:37.626385 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.176401 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.202854 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fce98890-1299-4c07-8a3a-739241f0bf0d","Type":"ContainerStarted","Data":"24e16f7149940a0a18fedd25334887f70fc506f4c985b1c9251d82a4fb9739cc"}
Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.205855 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61"}
Feb 17 16:15:38 crc kubenswrapper[4808]: I0217 16:15:38.966851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.069353 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"]
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.069595 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns" containerID="cri-o://f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1" gracePeriod=10
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.163258 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37da8fa5-9dda-4e98-9a63-a4c0036e0017" path="/var/lib/kubelet/pods/37da8fa5-9dda-4e98-9a63-a4c0036e0017/volumes"
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.304875 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fce98890-1299-4c07-8a3a-739241f0bf0d","Type":"ContainerStarted","Data":"740216c25dac67fe79b74559a81943d7b0edb6fa56bd4eaac977117b78b06d77"}
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.363135 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerStarted","Data":"f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a"}
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.363520 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.366859 4808 generic.go:334] "Generic (PLEG): container finished" podID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerID="f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1" exitCode=0
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.366914 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerDied","Data":"f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1"}
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.389245 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.649568118 podStartE2EDuration="5.389226258s" podCreationTimestamp="2026-02-17 16:15:34 +0000 UTC" firstStartedPulling="2026-02-17 16:15:35.071421423 +0000 UTC m=+1298.587780496" lastFinishedPulling="2026-02-17 16:15:38.811079563 +0000 UTC m=+1302.327438636" observedRunningTime="2026-02-17 16:15:39.387799519 +0000 UTC m=+1302.904158592" watchObservedRunningTime="2026-02-17 16:15:39.389226258 +0000 UTC m=+1302.905585331"
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.801837 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9"
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929160 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") "
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929649 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") "
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929731 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") "
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929759 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") "
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929797 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") "
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.929836 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") pod \"abaeb0d0-670e-4a6d-a583-b4885236c73d\" (UID: \"abaeb0d0-670e-4a6d-a583-b4885236c73d\") "
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.941978 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:39 crc kubenswrapper[4808]: I0217 16:15:39.948741 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f" (OuterVolumeSpecName: "kube-api-access-vpz7f") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "kube-api-access-vpz7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.019097 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033290 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") "
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033686 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") "
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033816 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") "
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033873 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") "
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033896 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") "
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.033946 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") pod \"23a1fa53-e668-4800-b54a-904f42d9eb5e\" (UID: \"23a1fa53-e668-4800-b54a-904f42d9eb5e\") "
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.034403 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.034434 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpz7f\" (UniqueName: \"kubernetes.io/projected/abaeb0d0-670e-4a6d-a583-b4885236c73d-kube-api-access-vpz7f\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.038244 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz" (OuterVolumeSpecName: "kube-api-access-7vhzz") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "kube-api-access-7vhzz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.047186 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.059502 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs" (OuterVolumeSpecName: "certs") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.079817 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts" (OuterVolumeSpecName: "scripts") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.100731 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data" (OuterVolumeSpecName: "config-data") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.115454 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.135881 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config" (OuterVolumeSpecName: "config") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136700 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136725 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136734 4808 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136742 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136751 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136761 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vhzz\" (UniqueName: \"kubernetes.io/projected/23a1fa53-e668-4800-b54a-904f42d9eb5e-kube-api-access-7vhzz\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.136770 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.149027 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.154012 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "abaeb0d0-670e-4a6d-a583-b4885236c73d" (UID: "abaeb0d0-670e-4a6d-a583-b4885236c73d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.154709 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23a1fa53-e668-4800-b54a-904f42d9eb5e" (UID: "23a1fa53-e668-4800-b54a-904f42d9eb5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.238699 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.238743 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/abaeb0d0-670e-4a6d-a583-b4885236c73d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.238753 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23a1fa53-e668-4800-b54a-904f42d9eb5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.300833 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f445fb886-lsqq4"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.377462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9" event={"ID":"abaeb0d0-670e-4a6d-a583-b4885236c73d","Type":"ContainerDied","Data":"673b376ab9a6f91954598ab4a63c75d818d8ff65e3bf87016ce8c6e162ed2846"}
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.377512 4808 scope.go:117] "RemoveContainer" containerID="f93f51535ebc44c66de2583206f5226e2e1eace05189cb4e738809b8081ce7e1"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.377640 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-7t4g9"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.392447 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fce98890-1299-4c07-8a3a-739241f0bf0d","Type":"ContainerStarted","Data":"0fbcf3645a02878f7a06725e686b31632542cd58b240a9b71ac9ab3f75c960a2"}
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399763 4808 generic.go:334] "Generic (PLEG): container finished" podID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94" exitCode=0
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399873 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399921 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerDied","Data":"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"}
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.399951 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"23a1fa53-e668-4800-b54a-904f42d9eb5e","Type":"ContainerDied","Data":"d486a3a307b0de09a60edde55636666b3342a5903cc110cae3e17e9502f50af9"}
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.413163 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.413143763 podStartE2EDuration="3.413143763s" podCreationTimestamp="2026-02-17 16:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:40.410961734 +0000 UTC m=+1303.927320807" watchObservedRunningTime="2026-02-17 16:15:40.413143763 +0000 UTC m=+1303.929502836"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.443721 4808 scope.go:117] "RemoveContainer" containerID="dddcaac247851948b323e115b84153bfcbcb71436b40ee468a0fbbfe54d676ae"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.444986 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"]
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.473307 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-7t4g9"]
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.504642 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.507103 4808 scope.go:117] "RemoveContainer" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.554623 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.557778 4808 scope.go:117] "RemoveContainer" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"
Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.558950 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94\": container with ID starting with 50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94 not found: ID does not exist" containerID="50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.559001 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94"} err="failed to get container status \"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94\": rpc error: code = NotFound desc = could not find container \"50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94\": container with ID starting with 50f1247e3e06436abc5b877c08bbabce85a826f30dcdbef9ab02ea5e21f03a94 not found: ID does not exist"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.560646 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.561074 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="init"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561092 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="init"
Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.561107 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561116 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc"
Feb 17 16:15:40 crc kubenswrapper[4808]: E0217 16:15:40.561150 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561158 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561321 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" containerName="dnsmasq-dns"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.561349 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" containerName="cloudkitty-proc"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.562066 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.567001 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.571297 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645114 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-scripts\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645270 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzh4f\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-kube-api-access-nzh4f\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645566 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0"
Feb 17 16:15:40 crc 
kubenswrapper[4808]: I0217 16:15:40.645721 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-certs\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.645799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748688 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-certs\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748738 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748811 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-scripts\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.748886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzh4f\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-kube-api-access-nzh4f\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.765348 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.766197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-config-data\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.778433 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " 
pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.779007 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-certs\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.782013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f49c04-388f-4eeb-be54-cbf3713606db-scripts\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.786044 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzh4f\" (UniqueName: \"kubernetes.io/projected/14f49c04-388f-4eeb-be54-cbf3713606db-kube-api-access-nzh4f\") pod \"cloudkitty-proc-0\" (UID: \"14f49c04-388f-4eeb-be54-cbf3713606db\") " pod="openstack/cloudkitty-proc-0" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.791021 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-679dfcbbb9-npbsd" Feb 17 16:15:40 crc kubenswrapper[4808]: I0217 16:15:40.880033 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.171386 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23a1fa53-e668-4800-b54a-904f42d9eb5e" path="/var/lib/kubelet/pods/23a1fa53-e668-4800-b54a-904f42d9eb5e/volumes" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.174185 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abaeb0d0-670e-4a6d-a583-b4885236c73d" path="/var/lib/kubelet/pods/abaeb0d0-670e-4a6d-a583-b4885236c73d/volumes" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.319038 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f445fb886-lsqq4" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.390641 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.390850 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75bd7dcff4-tfcmj" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log" containerID="cri-o://8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" gracePeriod=30 Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.391068 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75bd7dcff4-tfcmj" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api" containerID="cri-o://6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" gracePeriod=30 Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.569678 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.632625 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.633934 
4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.641460 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.642135 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zgf6f" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.642840 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.660095 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.682737 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.682982 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.683120 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 
16:15:41.683328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwbl\" (UniqueName: \"kubernetes.io/projected/5ce308e0-2ba0-41ae-8760-e749c8d04130-kube-api-access-rbwbl\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786866 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786889 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.786942 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbwbl\" (UniqueName: \"kubernetes.io/projected/5ce308e0-2ba0-41ae-8760-e749c8d04130-kube-api-access-rbwbl\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.792660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.792767 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.796700 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5ce308e0-2ba0-41ae-8760-e749c8d04130-openstack-config-secret\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.811351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbwbl\" (UniqueName: \"kubernetes.io/projected/5ce308e0-2ba0-41ae-8760-e749c8d04130-kube-api-access-rbwbl\") pod \"openstackclient\" (UID: \"5ce308e0-2ba0-41ae-8760-e749c8d04130\") " pod="openstack/openstackclient" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.829389 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.920835 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-76b995d5cb-7xs25" Feb 17 16:15:41 crc kubenswrapper[4808]: I0217 16:15:41.960517 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.436885 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" exitCode=143 Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.437330 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerDied","Data":"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1"} Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.440783 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"14f49c04-388f-4eeb-be54-cbf3713606db","Type":"ContainerStarted","Data":"1cbdb125da22ef63042e5aa9e2d4e26a8cd2f8c72f544f58ee1d82a4a0ba7b17"} Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.440814 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"14f49c04-388f-4eeb-be54-cbf3713606db","Type":"ContainerStarted","Data":"231b7739e843cae1aa504dfabcdb94cf556cd3fc4ee799cde98951ab165c4bf7"} Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.491211 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.491194532 podStartE2EDuration="2.491194532s" podCreationTimestamp="2026-02-17 16:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:42.453648545 +0000 UTC m=+1305.970007618" watchObservedRunningTime="2026-02-17 16:15:42.491194532 +0000 UTC m=+1306.007553605" Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.552756 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 
16:15:42.606250 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 17 16:15:42 crc kubenswrapper[4808]: I0217 16:15:42.630741 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 17 16:15:43 crc kubenswrapper[4808]: I0217 16:15:43.456927 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5ce308e0-2ba0-41ae-8760-e749c8d04130","Type":"ContainerStarted","Data":"842197a478f5f020ab22c11d7648ef4ee7379a947af34e2df48b686f2efc6dd2"} Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.215501 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258756 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258842 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258883 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.258950 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krq8t\" 
(UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.259079 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") pod \"bd86efad-8ad2-4e38-b731-5f892d34a582\" (UID: \"bd86efad-8ad2-4e38-b731-5f892d34a582\") " Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.262961 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs" (OuterVolumeSpecName: "logs") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.265477 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t" (OuterVolumeSpecName: "kube-api-access-krq8t") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "kube-api-access-krq8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.269727 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.318790 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362547 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd86efad-8ad2-4e38-b731-5f892d34a582-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362591 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362602 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krq8t\" (UniqueName: \"kubernetes.io/projected/bd86efad-8ad2-4e38-b731-5f892d34a582-kube-api-access-krq8t\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.362612 4808 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.379885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data" (OuterVolumeSpecName: "config-data") pod "bd86efad-8ad2-4e38-b731-5f892d34a582" (UID: "bd86efad-8ad2-4e38-b731-5f892d34a582"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.463955 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd86efad-8ad2-4e38-b731-5f892d34a582-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480183 4808 generic.go:334] "Generic (PLEG): container finished" podID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" exitCode=0 Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480235 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerDied","Data":"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13"} Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480253 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75bd7dcff4-tfcmj" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480275 4808 scope.go:117] "RemoveContainer" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.480263 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75bd7dcff4-tfcmj" event={"ID":"bd86efad-8ad2-4e38-b731-5f892d34a582","Type":"ContainerDied","Data":"5dc94be747fd1b78b9a66a8cfe5962566975f11bb39b1a72c4640a142fb1468d"} Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.518269 4808 scope.go:117] "RemoveContainer" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.523196 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.533272 4808 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75bd7dcff4-tfcmj"] Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.553824 4808 scope.go:117] "RemoveContainer" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" Feb 17 16:15:45 crc kubenswrapper[4808]: E0217 16:15:45.554350 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13\": container with ID starting with 6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13 not found: ID does not exist" containerID="6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.554388 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13"} err="failed to get container status \"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13\": rpc error: code = NotFound desc = could not find container \"6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13\": container with ID starting with 6b29334979377aae11d80c31ca2d701fe0397a6ebb1d0f68188d0b3c533f4e13 not found: ID does not exist" Feb 17 16:15:45 crc kubenswrapper[4808]: I0217 16:15:45.554416 4808 scope.go:117] "RemoveContainer" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" Feb 17 16:15:45 crc kubenswrapper[4808]: E0217 16:15:45.555024 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1\": container with ID starting with 8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1 not found: ID does not exist" containerID="8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1" Feb 17 16:15:45 crc 
kubenswrapper[4808]: I0217 16:15:45.555076 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1"} err="failed to get container status \"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1\": rpc error: code = NotFound desc = could not find container \"8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1\": container with ID starting with 8e81ed5ac5da2865c2bd786f6e608662f1f3114d1959d90beba10db5607a33f1 not found: ID does not exist"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.728963 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c6489dbc7-2ddnw"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.787892 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tmj75"]
Feb 17 16:15:46 crc kubenswrapper[4808]: E0217 16:15:46.799686 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.799730 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log"
Feb 17 16:15:46 crc kubenswrapper[4808]: E0217 16:15:46.799776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.799785 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.800156 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.800172 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" containerName="barbican-api-log"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.801085 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.811288 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.812257 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c8b8554dd-86wnt" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" containerID="cri-o://f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa" gracePeriod=30
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.812522 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c8b8554dd-86wnt" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" containerID="cri-o://6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb" gracePeriod=30
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.870830 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tmj75"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.890900 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.890966 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.906016 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.907311 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.937992 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.971987 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"]
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.973428 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.977181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993535 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993665 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993725 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993778 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.893877 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.993926 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.994717 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:46 crc kubenswrapper[4808]: I0217 16:15:46.995019 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.016761 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"nova-api-db-create-tmj75\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095224 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095277 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.095375 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.096076 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.096103 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-drbdx"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.098063 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.099015 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.107917 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.109193 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.111970 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.114506 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"nova-api-7e6f-account-create-update-zcm7d\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.115042 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"nova-cell0-db-create-bmg4x\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.129426 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.135157 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.144267 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.168607 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd86efad-8ad2-4e38-b731-5f892d34a582" path="/var/lib/kubelet/pods/bd86efad-8ad2-4e38-b731-5f892d34a582/volumes"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.197753 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.198004 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.198082 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.198109 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.226099 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300108 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300593 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.300677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.301424 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.301681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.302302 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.304495 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.326105 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"nova-cell0-0369-account-create-update-hd6gb\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.326893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"nova-cell1-db-create-drbdx\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.329366 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.402781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.403546 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.408724 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.506267 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.506561 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.509081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.522925 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"nova-cell1-490b-account-create-update-7wjkg\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.543711 4808 generic.go:334] "Generic (PLEG): container finished" podID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerID="6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb" exitCode=0
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.543842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerDied","Data":"6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb"}
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.566124 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.592093 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.634675 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tmj75"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.639176 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg"
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.767749 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"]
Feb 17 16:15:47 crc kubenswrapper[4808]: I0217 16:15:47.999445 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"]
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.104762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.168142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"]
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.235790 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"]
Feb 17 16:15:48 crc kubenswrapper[4808]: W0217 16:15:48.242803 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6cd1abe_7b23_494f_b22f_b355f5937f82.slice/crio-6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737 WatchSource:0}: Error finding container 6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737: Status 404 returned error can't find the container with id 6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.280285 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"]
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.558730 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerStarted","Data":"51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.558821 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerStarted","Data":"8a75933f3031c6b1f8cf8ff6b1411acfe98718f81345fbaa18024575af0bf6ba"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.561342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerStarted","Data":"4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.561387 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerStarted","Data":"6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.563194 4808 generic.go:334] "Generic (PLEG): container finished" podID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerID="24b6cca39f7f0539540e703e695312278dead1c9fbed89b92d1978c2b31592d9" exitCode=0
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.563238 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" event={"ID":"adb98158-8a64-4a24-9d8a-5c7308881c79","Type":"ContainerDied","Data":"24b6cca39f7f0539540e703e695312278dead1c9fbed89b92d1978c2b31592d9"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.563258 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" event={"ID":"adb98158-8a64-4a24-9d8a-5c7308881c79","Type":"ContainerStarted","Data":"0dc09ac306fc7e2b364ea4b44d5d09a138003a1e81f7a44ecd2f51ed4b1d1b89"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.565314 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerStarted","Data":"75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.565340 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerStarted","Data":"3e57bebfb95b0d9d4f461957a8bd1f2f06012fd271323ebe71abc58fa6b4937e"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.568368 4808 generic.go:334] "Generic (PLEG): container finished" podID="84bc7003-1a29-41b6-af75-956706dd0efe" containerID="8a03cfda6ba1482551fb43a88bb0d456e3e357369b1e584649fa69312e5fe7ab" exitCode=0
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.568449 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bmg4x" event={"ID":"84bc7003-1a29-41b6-af75-956706dd0efe","Type":"ContainerDied","Data":"8a03cfda6ba1482551fb43a88bb0d456e3e357369b1e584649fa69312e5fe7ab"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.568482 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bmg4x" event={"ID":"84bc7003-1a29-41b6-af75-956706dd0efe","Type":"ContainerStarted","Data":"cf5220fed618b3508a0f2ed78390fae1a7cb088c433552f6ee16c31271e9f9f4"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.572645 4808 generic.go:334] "Generic (PLEG): container finished" podID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerID="202121dae9bdf398a0c42e540c49f3bde76321b020f7cab3e7250c352d974480" exitCode=0
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.572696 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmj75" event={"ID":"785bc852-9af8-4d44-9c07-a7b501efb72c","Type":"ContainerDied","Data":"202121dae9bdf398a0c42e540c49f3bde76321b020f7cab3e7250c352d974480"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.572720 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmj75" event={"ID":"785bc852-9af8-4d44-9c07-a7b501efb72c","Type":"ContainerStarted","Data":"39a847653b65f7a910542af7c8bf6279189cd0c6dc3f5a9660574c5fd3b57fa7"}
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.582128 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-drbdx" podStartSLOduration=1.5821075580000001 podStartE2EDuration="1.582107558s" podCreationTimestamp="2026-02-17 16:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:48.580478265 +0000 UTC m=+1312.096837338" watchObservedRunningTime="2026-02-17 16:15:48.582107558 +0000 UTC m=+1312.098466631"
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.614786 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" podStartSLOduration=1.614769713 podStartE2EDuration="1.614769713s" podCreationTimestamp="2026-02-17 16:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:48.610218409 +0000 UTC m=+1312.126577482" watchObservedRunningTime="2026-02-17 16:15:48.614769713 +0000 UTC m=+1312.131128786"
Feb 17 16:15:48 crc kubenswrapper[4808]: I0217 16:15:48.647966 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" podStartSLOduration=1.6479470809999999 podStartE2EDuration="1.647947081s" podCreationTimestamp="2026-02-17 16:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:48.645812663 +0000 UTC m=+1312.162171746" watchObservedRunningTime="2026-02-17 16:15:48.647947081 +0000 UTC m=+1312.164306154"
Feb 17 16:15:48 crc kubenswrapper[4808]: E0217 16:15:48.936703 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6543f3f_c70d_4258_b1f3_b74458b60153.slice/crio-conmon-51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6543f3f_c70d_4258_b1f3_b74458b60153.slice/crio-51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbad0fdf2_2880_4568_87b0_6319f864c348.slice/crio-conmon-75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.583543 4808 generic.go:334] "Generic (PLEG): container finished" podID="bad0fdf2-2880-4568-87b0-6319f864c348" containerID="75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28" exitCode=0
Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.584145 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerDied","Data":"75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28"}
Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.586944 4808 generic.go:334] "Generic (PLEG): container finished" podID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerID="51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336" exitCode=0
Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.587066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerDied","Data":"51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336"}
Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.588653 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerID="4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca" exitCode=0
Feb 17 16:15:49 crc kubenswrapper[4808]: I0217 16:15:49.588794 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerDied","Data":"4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca"}
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.490807 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491314 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" containerID="cri-o://7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8" gracePeriod=30
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491423 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" containerID="cri-o://1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68" gracePeriod=30
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491418 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" containerID="cri-o://a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61" gracePeriod=30
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.491473 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" containerID="cri-o://f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a" gracePeriod=30
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.500758 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.189:3000/\": EOF"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.649161 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-dcfbdc547-54spv"]
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.651077 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.654464 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.654609 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.655115 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.669796 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dcfbdc547-54spv"]
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.709993 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-run-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710048 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5zvx\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-kube-api-access-g5zvx\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710085 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-etc-swift\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710108 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-config-data\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710167 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-internal-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710188 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-combined-ca-bundle\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710207 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-log-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.710250 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-public-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813713 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-combined-ca-bundle\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813760 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-log-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv"
Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813814 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-public-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") "
pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813868 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-run-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813896 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5zvx\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-kube-api-access-g5zvx\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813929 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-etc-swift\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.813951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-config-data\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.814008 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-internal-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc 
kubenswrapper[4808]: I0217 16:15:50.815284 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-run-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.820530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-log-httpd\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.822181 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-internal-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.823878 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-combined-ca-bundle\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.825082 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-config-data\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.834961 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-public-tls-certs\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.836281 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-etc-swift\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.840290 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5zvx\" (UniqueName: \"kubernetes.io/projected/45097e1f-e6c7-40c1-8338-3f1ac506c3fe-kube-api-access-g5zvx\") pod \"swift-proxy-dcfbdc547-54spv\" (UID: \"45097e1f-e6c7-40c1-8338-3f1ac506c3fe\") " pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:50 crc kubenswrapper[4808]: I0217 16:15:50.972177 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.591777 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.592112 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.611887 4808 generic.go:334] "Generic (PLEG): container finished" podID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerID="f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.611957 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerDied","Data":"f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615944 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615970 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61" exitCode=2 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615979 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615988 4808 generic.go:334] "Generic (PLEG): container finished" podID="ade95199-c613-4920-aa24-6cedde28dda6" containerID="7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8" exitCode=0 Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.615971 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.616020 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.616033 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68"} Feb 17 16:15:51 crc kubenswrapper[4808]: I0217 16:15:51.616042 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.656435 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-drbdx" event={"ID":"b6543f3f-c70d-4258-b1f3-b74458b60153","Type":"ContainerDied","Data":"8a75933f3031c6b1f8cf8ff6b1411acfe98718f81345fbaa18024575af0bf6ba"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.657192 4808 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a75933f3031c6b1f8cf8ff6b1411acfe98718f81345fbaa18024575af0bf6ba" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.660517 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" event={"ID":"c6cd1abe-7b23-494f-b22f-b355f5937f82","Type":"ContainerDied","Data":"6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.660711 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fcf5c8c9a435e82fce69581ddd3ecd326525abf323b41292990f134a973e737" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.665351 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" event={"ID":"adb98158-8a64-4a24-9d8a-5c7308881c79","Type":"ContainerDied","Data":"0dc09ac306fc7e2b364ea4b44d5d09a138003a1e81f7a44ecd2f51ed4b1d1b89"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.665392 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc09ac306fc7e2b364ea4b44d5d09a138003a1e81f7a44ecd2f51ed4b1d1b89" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.668131 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" event={"ID":"bad0fdf2-2880-4568-87b0-6319f864c348","Type":"ContainerDied","Data":"3e57bebfb95b0d9d4f461957a8bd1f2f06012fd271323ebe71abc58fa6b4937e"} Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.668164 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e57bebfb95b0d9d4f461957a8bd1f2f06012fd271323ebe71abc58fa6b4937e" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.895414 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.910986 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.918108 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.923230 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.931190 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.933866 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") pod \"c6cd1abe-7b23-494f-b22f-b355f5937f82\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.933966 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") pod \"c6cd1abe-7b23-494f-b22f-b355f5937f82\" (UID: \"c6cd1abe-7b23-494f-b22f-b355f5937f82\") " Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.936096 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6cd1abe-7b23-494f-b22f-b355f5937f82" (UID: "c6cd1abe-7b23-494f-b22f-b355f5937f82"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.938793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m" (OuterVolumeSpecName: "kube-api-access-njp8m") pod "c6cd1abe-7b23-494f-b22f-b355f5937f82" (UID: "c6cd1abe-7b23-494f-b22f-b355f5937f82"). InnerVolumeSpecName "kube-api-access-njp8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:55 crc kubenswrapper[4808]: I0217 16:15:55.941643 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tmj75" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.035727 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") pod \"b6543f3f-c70d-4258-b1f3-b74458b60153\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036081 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") pod \"bad0fdf2-2880-4568-87b0-6319f864c348\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6543f3f-c70d-4258-b1f3-b74458b60153" (UID: "b6543f3f-c70d-4258-b1f3-b74458b60153"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036155 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") pod \"785bc852-9af8-4d44-9c07-a7b501efb72c\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036297 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") pod \"adb98158-8a64-4a24-9d8a-5c7308881c79\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") pod \"84bc7003-1a29-41b6-af75-956706dd0efe\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036512 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") pod \"785bc852-9af8-4d44-9c07-a7b501efb72c\" (UID: \"785bc852-9af8-4d44-9c07-a7b501efb72c\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036541 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") pod \"adb98158-8a64-4a24-9d8a-5c7308881c79\" (UID: \"adb98158-8a64-4a24-9d8a-5c7308881c79\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036666 4808 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") pod \"bad0fdf2-2880-4568-87b0-6319f864c348\" (UID: \"bad0fdf2-2880-4568-87b0-6319f864c348\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036723 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") pod \"b6543f3f-c70d-4258-b1f3-b74458b60153\" (UID: \"b6543f3f-c70d-4258-b1f3-b74458b60153\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.036766 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") pod \"84bc7003-1a29-41b6-af75-956706dd0efe\" (UID: \"84bc7003-1a29-41b6-af75-956706dd0efe\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.037648 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njp8m\" (UniqueName: \"kubernetes.io/projected/c6cd1abe-7b23-494f-b22f-b355f5937f82-kube-api-access-njp8m\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.037668 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6cd1abe-7b23-494f-b22f-b355f5937f82-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.037677 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6543f3f-c70d-4258-b1f3-b74458b60153-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.038084 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "adb98158-8a64-4a24-9d8a-5c7308881c79" (UID: "adb98158-8a64-4a24-9d8a-5c7308881c79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.038167 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "84bc7003-1a29-41b6-af75-956706dd0efe" (UID: "84bc7003-1a29-41b6-af75-956706dd0efe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.039175 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "785bc852-9af8-4d44-9c07-a7b501efb72c" (UID: "785bc852-9af8-4d44-9c07-a7b501efb72c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.039222 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bad0fdf2-2880-4568-87b0-6319f864c348" (UID: "bad0fdf2-2880-4568-87b0-6319f864c348"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.042169 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl" (OuterVolumeSpecName: "kube-api-access-qhqjl") pod "adb98158-8a64-4a24-9d8a-5c7308881c79" (UID: "adb98158-8a64-4a24-9d8a-5c7308881c79"). InnerVolumeSpecName "kube-api-access-qhqjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.043794 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r" (OuterVolumeSpecName: "kube-api-access-w296r") pod "bad0fdf2-2880-4568-87b0-6319f864c348" (UID: "bad0fdf2-2880-4568-87b0-6319f864c348"). InnerVolumeSpecName "kube-api-access-w296r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.044919 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd" (OuterVolumeSpecName: "kube-api-access-27rbd") pod "b6543f3f-c70d-4258-b1f3-b74458b60153" (UID: "b6543f3f-c70d-4258-b1f3-b74458b60153"). InnerVolumeSpecName "kube-api-access-27rbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.045024 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx" (OuterVolumeSpecName: "kube-api-access-g8jmx") pod "785bc852-9af8-4d44-9c07-a7b501efb72c" (UID: "785bc852-9af8-4d44-9c07-a7b501efb72c"). InnerVolumeSpecName "kube-api-access-g8jmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.046830 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm" (OuterVolumeSpecName: "kube-api-access-pmqtm") pod "84bc7003-1a29-41b6-af75-956706dd0efe" (UID: "84bc7003-1a29-41b6-af75-956706dd0efe"). InnerVolumeSpecName "kube-api-access-pmqtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.140370 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmqtm\" (UniqueName: \"kubernetes.io/projected/84bc7003-1a29-41b6-af75-956706dd0efe-kube-api-access-pmqtm\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.140704 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/785bc852-9af8-4d44-9c07-a7b501efb72c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.140838 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhqjl\" (UniqueName: \"kubernetes.io/projected/adb98158-8a64-4a24-9d8a-5c7308881c79-kube-api-access-qhqjl\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141129 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bad0fdf2-2880-4568-87b0-6319f864c348-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141268 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27rbd\" (UniqueName: \"kubernetes.io/projected/b6543f3f-c70d-4258-b1f3-b74458b60153-kube-api-access-27rbd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141435 4808 
reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/84bc7003-1a29-41b6-af75-956706dd0efe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141470 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w296r\" (UniqueName: \"kubernetes.io/projected/bad0fdf2-2880-4568-87b0-6319f864c348-kube-api-access-w296r\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141494 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8jmx\" (UniqueName: \"kubernetes.io/projected/785bc852-9af8-4d44-9c07-a7b501efb72c-kube-api-access-g8jmx\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.141518 4808 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/adb98158-8a64-4a24-9d8a-5c7308881c79-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.219675 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244023 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244069 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244103 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244159 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244180 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244223 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.244294 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") pod \"ade95199-c613-4920-aa24-6cedde28dda6\" (UID: \"ade95199-c613-4920-aa24-6cedde28dda6\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.248296 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.251767 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.252040 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts" (OuterVolumeSpecName: "scripts") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.254838 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4" (OuterVolumeSpecName: "kube-api-access-rcrg4") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "kube-api-access-rcrg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.305755 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.306022 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348282 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348368 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348389 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348500 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.348566 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") pod \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\" (UID: \"b4b8e73f-b7b0-4580-8e0f-44eef84624e4\") " Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349139 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349158 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349170 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349185 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcrg4\" (UniqueName: \"kubernetes.io/projected/ade95199-c613-4920-aa24-6cedde28dda6-kube-api-access-rcrg4\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.349196 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ade95199-c613-4920-aa24-6cedde28dda6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.366092 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z" (OuterVolumeSpecName: "kube-api-access-wnm4z") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "kube-api-access-wnm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.366099 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.402598 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.421755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data" (OuterVolumeSpecName: "config-data") pod "ade95199-c613-4920-aa24-6cedde28dda6" (UID: "ade95199-c613-4920-aa24-6cedde28dda6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.434489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config" (OuterVolumeSpecName: "config") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.438771 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.450262 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b4b8e73f-b7b0-4580-8e0f-44eef84624e4" (UID: "b4b8e73f-b7b0-4580-8e0f-44eef84624e4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.451986 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnm4z\" (UniqueName: \"kubernetes.io/projected/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-kube-api-access-wnm4z\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452012 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452020 4808 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452031 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452041 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452054 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ade95199-c613-4920-aa24-6cedde28dda6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.452062 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4b8e73f-b7b0-4580-8e0f-44eef84624e4-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.510529 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-dcfbdc547-54spv"] Feb 17 16:15:56 crc kubenswrapper[4808]: W0217 16:15:56.515494 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45097e1f_e6c7_40c1_8338_3f1ac506c3fe.slice/crio-b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5 WatchSource:0}: Error finding container b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5: Status 404 returned error can't find the container with id b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5 Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.688303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tmj75" event={"ID":"785bc852-9af8-4d44-9c07-a7b501efb72c","Type":"ContainerDied","Data":"39a847653b65f7a910542af7c8bf6279189cd0c6dc3f5a9660574c5fd3b57fa7"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.688560 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a847653b65f7a910542af7c8bf6279189cd0c6dc3f5a9660574c5fd3b57fa7" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.688713 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tmj75" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.692863 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c8b8554dd-86wnt" event={"ID":"b4b8e73f-b7b0-4580-8e0f-44eef84624e4","Type":"ContainerDied","Data":"37ecb8a325939b5e585da0c83aac7cd196aa16f8c7e46e0941abecb0dea07a08"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.692912 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c8b8554dd-86wnt" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.692921 4808 scope.go:117] "RemoveContainer" containerID="6fb4ffeac0605961472d3b2de8b2dce4344cba69b4920dc698cb1b861244c6eb" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.696626 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ade95199-c613-4920-aa24-6cedde28dda6","Type":"ContainerDied","Data":"356af2c8c1b6e4c7feb3f6d92a6b8bd00153587c6186bbe593c45d6ad9a2caaf"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.696673 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.699360 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5ce308e0-2ba0-41ae-8760-e749c8d04130","Type":"ContainerStarted","Data":"0439c1b605810f673e651f06c93177fa20814d1c29ae34ee315d15a1a316426a"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.700336 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dcfbdc547-54spv" event={"ID":"45097e1f-e6c7-40c1-8338-3f1ac506c3fe","Type":"ContainerStarted","Data":"b0dbab620023f457e61bb422dc35d5955af6d5e8f4821b2d804b7dd5cc9caab5"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701520 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0369-account-create-update-hd6gb" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701580 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bmg4x" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701597 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-drbdx" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701613 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bmg4x" event={"ID":"84bc7003-1a29-41b6-af75-956706dd0efe","Type":"ContainerDied","Data":"cf5220fed618b3508a0f2ed78390fae1a7cb088c433552f6ee16c31271e9f9f4"} Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701630 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5220fed618b3508a0f2ed78390fae1a7cb088c433552f6ee16c31271e9f9f4" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701632 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-490b-account-create-update-7wjkg" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.701662 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7e6f-account-create-update-zcm7d" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.720245 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.523250086 podStartE2EDuration="15.720211597s" podCreationTimestamp="2026-02-17 16:15:41 +0000 UTC" firstStartedPulling="2026-02-17 16:15:42.55023311 +0000 UTC m=+1306.066592173" lastFinishedPulling="2026-02-17 16:15:55.747194611 +0000 UTC m=+1319.263553684" observedRunningTime="2026-02-17 16:15:56.715782518 +0000 UTC m=+1320.232141591" watchObservedRunningTime="2026-02-17 16:15:56.720211597 +0000 UTC m=+1320.236570670" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.746311 4808 scope.go:117] "RemoveContainer" containerID="f3f7fd1ba085d42fb2a1208d784040ea1e2e45a48ec8b1c70c8122235d3614aa" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.804544 4808 scope.go:117] "RemoveContainer" containerID="f08bbc217988c1d4a683f5088b670b4d5a57e2fdbedee004dcb40bd4e6db140a" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.808419 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.834626 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c8b8554dd-86wnt"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.847421 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.853141 4808 scope.go:117] "RemoveContainer" containerID="a6b58d8e1d61eb15475898662433c7b6ba1aca7c7f517ddedfbced3c5aaf2a61" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.865705 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874407 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 
17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874827 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874846 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874861 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874868 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874886 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874892 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874906 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874912 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874920 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874928 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" 
containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874942 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874949 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874962 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874967 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874980 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.874986 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.874997 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875003 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.875012 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875019 4808 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.875027 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875034 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" Feb 17 16:15:56 crc kubenswrapper[4808]: E0217 16:15:56.875043 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875049 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875235 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875246 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-api" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875258 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="proxy-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875270 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" containerName="neutron-httpd" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875283 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-notification-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 
16:15:56.875290 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875298 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="ceilometer-central-agent" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875308 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade95199-c613-4920-aa24-6cedde28dda6" containerName="sg-core" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875318 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" containerName="mariadb-account-create-update" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875329 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875335 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.875344 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" containerName="mariadb-database-create" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.877365 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.879607 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.879776 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.883750 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.893490 4808 scope.go:117] "RemoveContainer" containerID="1475151fb2b9ec40ea170157633c4ee253f1d8d7d5da164ebda9104b80ecbb68" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.923202 4808 scope.go:117] "RemoveContainer" containerID="7026f52ab348147acdc0cc1845b030fe4c38003a827c4074efe539c2c13f73e8" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963246 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 
crc kubenswrapper[4808]: I0217 16:15:56.963309 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963547 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963724 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:56 crc kubenswrapper[4808]: I0217 16:15:56.963782 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065308 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065685 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065706 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065799 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065821 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.065871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc 
kubenswrapper[4808]: I0217 16:15:57.066310 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.066510 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.071724 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.072341 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.073235 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.078799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.083845 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"ceilometer-0\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.165566 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade95199-c613-4920-aa24-6cedde28dda6" path="/var/lib/kubelet/pods/ade95199-c613-4920-aa24-6cedde28dda6/volumes" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.166519 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b8e73f-b7b0-4580-8e0f-44eef84624e4" path="/var/lib/kubelet/pods/b4b8e73f-b7b0-4580-8e0f-44eef84624e4/volumes" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.194310 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.698853 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:57 crc kubenswrapper[4808]: W0217 16:15:57.700449 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26053b6_532d_42e0_84a8_9ad29e1168d3.slice/crio-a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8 WatchSource:0}: Error finding container a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8: Status 404 returned error can't find the container with id a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8 Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.703769 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.738470 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8"} Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.744720 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dcfbdc547-54spv" event={"ID":"45097e1f-e6c7-40c1-8338-3f1ac506c3fe","Type":"ContainerStarted","Data":"7792b065ae0edf8db1757f3f3b9f6fbd9960bdac27171c26a8590ad7277582da"} Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.744768 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-dcfbdc547-54spv" event={"ID":"45097e1f-e6c7-40c1-8338-3f1ac506c3fe","Type":"ContainerStarted","Data":"dec1f5b8a7b4d282b15f0cb2e044c9ba55004eb023fff21ce9494f27f7d32dd6"} Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.744826 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.747895 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:15:57 crc kubenswrapper[4808]: I0217 16:15:57.781622 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-dcfbdc547-54spv" podStartSLOduration=7.781597437 podStartE2EDuration="7.781597437s" podCreationTimestamp="2026-02-17 16:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:15:57.774418843 +0000 UTC m=+1321.290777966" watchObservedRunningTime="2026-02-17 16:15:57.781597437 +0000 UTC m=+1321.297956520" Feb 17 16:15:59 crc kubenswrapper[4808]: I0217 16:15:59.239443 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:15:59 crc kubenswrapper[4808]: I0217 16:15:59.776874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd"} Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.170203 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.172002 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.181043 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.181303 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcmz6" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.181554 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.191784 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.285968 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.286083 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.286133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " 
pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.286204 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.388549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.389003 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.389203 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.389357 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: 
\"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.395168 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.395622 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.395666 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.406075 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"nova-cell0-conductor-db-sync-zrx8j\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") " pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.494129 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" Feb 17 16:16:02 crc kubenswrapper[4808]: I0217 16:16:02.832833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888"} Feb 17 16:16:03 crc kubenswrapper[4808]: W0217 16:16:03.026276 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda276997e_b8ab_4b5a_ac5f_c21a8114d673.slice/crio-268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5 WatchSource:0}: Error finding container 268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5: Status 404 returned error can't find the container with id 268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5 Feb 17 16:16:03 crc kubenswrapper[4808]: I0217 16:16:03.026296 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:16:03 crc kubenswrapper[4808]: I0217 16:16:03.859274 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerStarted","Data":"268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5"} Feb 17 16:16:04 crc kubenswrapper[4808]: I0217 16:16:04.870011 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d"} Feb 17 16:16:05 crc kubenswrapper[4808]: I0217 16:16:05.980818 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:16:05 crc kubenswrapper[4808]: I0217 16:16:05.984032 4808 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/swift-proxy-dcfbdc547-54spv" Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.888955 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerStarted","Data":"e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58"} Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889339 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" containerID="cri-o://e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889270 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" containerID="cri-o://26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889343 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" containerID="cri-o://aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.889374 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" containerID="cri-o://0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888" gracePeriod=30 Feb 17 16:16:06 crc kubenswrapper[4808]: I0217 16:16:06.919158 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.692136971 
podStartE2EDuration="10.919143108s" podCreationTimestamp="2026-02-17 16:15:56 +0000 UTC" firstStartedPulling="2026-02-17 16:15:57.703397649 +0000 UTC m=+1321.219756722" lastFinishedPulling="2026-02-17 16:16:05.930403786 +0000 UTC m=+1329.446762859" observedRunningTime="2026-02-17 16:16:06.917396411 +0000 UTC m=+1330.433755484" watchObservedRunningTime="2026-02-17 16:16:06.919143108 +0000 UTC m=+1330.435502181" Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.209843 4808 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod37da8fa5-9dda-4e98-9a63-a4c0036e0017"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod37da8fa5-9dda-4e98-9a63-a4c0036e0017] : Timed out while waiting for systemd to remove kubepods-besteffort-pod37da8fa5_9dda_4e98_9a63_a4c0036e0017.slice" Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900398 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58" exitCode=0 Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900428 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d" exitCode=2 Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900459 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888" exitCode=0 Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900427 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58"} Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900489 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d"} Feb 17 16:16:07 crc kubenswrapper[4808]: I0217 16:16:07.900500 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888"} Feb 17 16:16:09 crc kubenswrapper[4808]: I0217 16:16:09.933891 4808 generic.go:334] "Generic (PLEG): container finished" podID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerID="26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd" exitCode=0 Feb 17 16:16:09 crc kubenswrapper[4808]: I0217 16:16:09.933991 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd"} Feb 17 16:16:13 crc kubenswrapper[4808]: I0217 16:16:13.188198 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.175082 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.305344 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.305402 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.305785 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306266 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306350 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306372 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306407 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306496 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") pod \"b26053b6-532d-42e0-84a8-9ad29e1168d3\" (UID: \"b26053b6-532d-42e0-84a8-9ad29e1168d3\") " Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.306675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.307494 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.307519 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b26053b6-532d-42e0-84a8-9ad29e1168d3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.310831 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr" (OuterVolumeSpecName: "kube-api-access-wc4cr") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "kube-api-access-wc4cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.316798 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts" (OuterVolumeSpecName: "scripts") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.340528 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.403328 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410044 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410164 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc4cr\" (UniqueName: \"kubernetes.io/projected/b26053b6-532d-42e0-84a8-9ad29e1168d3-kube-api-access-wc4cr\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410256 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.410323 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-scripts\") on node 
\"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.413949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data" (OuterVolumeSpecName: "config-data") pod "b26053b6-532d-42e0-84a8-9ad29e1168d3" (UID: "b26053b6-532d-42e0-84a8-9ad29e1168d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.511961 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b26053b6-532d-42e0-84a8-9ad29e1168d3-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.986021 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerStarted","Data":"03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f"} Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.991634 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b26053b6-532d-42e0-84a8-9ad29e1168d3","Type":"ContainerDied","Data":"a81d691f61912aaa98c6eb558cf89221dca2d88f6d8316dfd3364666d1a3bef8"} Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.991750 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:14 crc kubenswrapper[4808]: I0217 16:16:14.991853 4808 scope.go:117] "RemoveContainer" containerID="e2ccf9ff3f670d7de30bfa9163b03233d4d4a71f4581fbec22a47c8d402ebd58" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.005961 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" podStartSLOduration=2.251788694 podStartE2EDuration="13.005945489s" podCreationTimestamp="2026-02-17 16:16:02 +0000 UTC" firstStartedPulling="2026-02-17 16:16:03.029026694 +0000 UTC m=+1326.545385767" lastFinishedPulling="2026-02-17 16:16:13.783183489 +0000 UTC m=+1337.299542562" observedRunningTime="2026-02-17 16:16:15.004223882 +0000 UTC m=+1338.520582955" watchObservedRunningTime="2026-02-17 16:16:15.005945489 +0000 UTC m=+1338.522304562" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.019112 4808 scope.go:117] "RemoveContainer" containerID="aae377a74573763676b86b70c1c3f0564761605238764edc050e4bcbb700450d" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.042854 4808 scope.go:117] "RemoveContainer" containerID="0859f5931b4f6911204f39fb8dca910ef06274861a3a534de924c3a3792b5888" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.048645 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.059670 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070274 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070794 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070817 4808 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070851 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070862 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070876 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070884 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" Feb 17 16:16:15 crc kubenswrapper[4808]: E0217 16:16:15.070916 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.070925 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071146 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="proxy-httpd" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071176 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-central-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071193 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="sg-core" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.071205 4808 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" containerName="ceilometer-notification-agent" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.073336 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.076494 4808 scope.go:117] "RemoveContainer" containerID="26452d6ca1aa9de491489e0904eac549f1df8fca08d5c4e57d5f1ca767c331fd" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.078816 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.079210 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.088710 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.158087 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26053b6-532d-42e0-84a8-9ad29e1168d3" path="/var/lib/kubelet/pods/b26053b6-532d-42e0-84a8-9ad29e1168d3/volumes" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226329 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226524 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.226888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.227104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328345 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.328993 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329033 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329067 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc 
kubenswrapper[4808]: I0217 16:16:15.329128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.329736 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.331672 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.332479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.332877 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.334525 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " 
pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.348087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.356142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"ceilometer-0\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.390075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:15 crc kubenswrapper[4808]: I0217 16:16:15.875407 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:16 crc kubenswrapper[4808]: I0217 16:16:16.004471 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"b85ba2e2aadf05c8a92885adbf2c7f51e6f51c7f11cdad1a0c73632146a66e50"} Feb 17 16:16:17 crc kubenswrapper[4808]: I0217 16:16:17.016564 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5"} Feb 17 16:16:17 crc kubenswrapper[4808]: I0217 16:16:17.246450 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.030294 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 17 16:16:18 crc 
kubenswrapper[4808]: I0217 16:16:18.030717 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd" containerID="cri-o://ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" gracePeriod=30 Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.030665 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log" containerID="cri-o://ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" gracePeriod=30 Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.041508 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2"} Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.041552 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8"} Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.980850 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.981353 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd" containerID="cri-o://177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" gracePeriod=30 Feb 17 16:16:18 crc kubenswrapper[4808]: I0217 16:16:18.982461 4808 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/glance-default-internal-api-0" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log" containerID="cri-o://93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" gracePeriod=30 Feb 17 16:16:19 crc kubenswrapper[4808]: I0217 16:16:19.052376 4808 generic.go:334] "Generic (PLEG): container finished" podID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" exitCode=143 Feb 17 16:16:19 crc kubenswrapper[4808]: I0217 16:16:19.052417 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerDied","Data":"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f"} Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.064373 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674" exitCode=143 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.064443 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerDied","Data":"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"} Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067554 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerStarted","Data":"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83"} Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067785 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" 
containerID="cri-o://301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067918 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" containerID="cri-o://271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067840 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" containerID="cri-o://d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067820 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.067876 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" containerID="cri-o://112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" gracePeriod=30 Feb 17 16:16:20 crc kubenswrapper[4808]: I0217 16:16:20.099395 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.688753285 podStartE2EDuration="5.099371446s" podCreationTimestamp="2026-02-17 16:16:15 +0000 UTC" firstStartedPulling="2026-02-17 16:16:15.889485063 +0000 UTC m=+1339.405844136" lastFinishedPulling="2026-02-17 16:16:19.300103224 +0000 UTC m=+1342.816462297" observedRunningTime="2026-02-17 16:16:20.089297403 +0000 UTC m=+1343.605656476" watchObservedRunningTime="2026-02-17 16:16:20.099371446 +0000 UTC m=+1343.615730529" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.076978 4808 generic.go:334] 
"Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" exitCode=0 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078021 4808 generic.go:334] "Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" exitCode=2 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078145 4808 generic.go:334] "Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" exitCode=0 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.077049 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83"} Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078334 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2"} Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.078426 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8"} Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.592273 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.592663 4808 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.592709 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.593332 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.593400 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49" gracePeriod=600 Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.774551 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859138 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859248 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859300 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859391 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859449 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859489 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859508 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859667 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\" (UID: \"311ff62c-be53-44b9-a2f7-933e94d8dfb1\") " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.859912 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.860496 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.860615 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs" (OuterVolumeSpecName: "logs") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.866214 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72" (OuterVolumeSpecName: "kube-api-access-v2l72") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "kube-api-access-v2l72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.875231 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts" (OuterVolumeSpecName: "scripts") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.896597 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (OuterVolumeSpecName: "glance") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.921452 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.921595 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962594 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2l72\" (UniqueName: \"kubernetes.io/projected/311ff62c-be53-44b9-a2f7-933e94d8dfb1-kube-api-access-v2l72\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962919 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962931 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962945 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/311ff62c-be53-44b9-a2f7-933e94d8dfb1-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962956 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.962991 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" " Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.976858 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data" (OuterVolumeSpecName: "config-data") pod "311ff62c-be53-44b9-a2f7-933e94d8dfb1" (UID: "311ff62c-be53-44b9-a2f7-933e94d8dfb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.995274 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:16:21 crc kubenswrapper[4808]: I0217 16:16:21.995430 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-2d669ca1-f580-41d6-88d3-29cb32d20522" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522") on node "crc" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.064588 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/311ff62c-be53-44b9-a2f7-933e94d8dfb1-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.064617 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091178 4808 generic.go:334] "Generic (PLEG): container finished" podID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" exitCode=0 Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091225 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerDied","Data":"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091273 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"311ff62c-be53-44b9-a2f7-933e94d8dfb1","Type":"ContainerDied","Data":"5259b7f9e5eb8d16dd9b6467f0a2e9d1eee838ac2578fd7225262f0187ce85fa"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091292 4808 scope.go:117] "RemoveContainer" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.091254 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.099195 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49" exitCode=0 Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.099296 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.099488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"} Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.114598 4808 scope.go:117] "RemoveContainer" 
containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.139894 4808 scope.go:117] "RemoveContainer" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.140301 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5\": container with ID starting with ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5 not found: ID does not exist" containerID="ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.140338 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5"} err="failed to get container status \"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5\": rpc error: code = NotFound desc = could not find container \"ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5\": container with ID starting with ff2f31bf8a59a9020889f1060c244d02f3cdf820c32dde20eee91d0b4e8e88f5 not found: ID does not exist" Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.140365 4808 scope.go:117] "RemoveContainer" containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.142300 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f\": container with ID starting with ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f not found: ID does not exist" containerID="ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f" Feb 17 16:16:22 crc 
kubenswrapper[4808]: I0217 16:16:22.142469 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f"} err="failed to get container status \"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f\": rpc error: code = NotFound desc = could not find container \"ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f\": container with ID starting with ae6f17f8e667309ba204350d8bb1c7687a14a6c30d1d2913b4f840091857035f not found: ID does not exist"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.142555 4808 scope.go:117] "RemoveContainer" containerID="12b4e957316b11ee081f9acecacedfdbabeee0248dc83ade7fe5f8b084a798ba"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.165265 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.212640 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.229647 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.230422 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.230545 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd"
Feb 17 16:16:22 crc kubenswrapper[4808]: E0217 16:16:22.230683 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.230767 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.231086 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-httpd"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.231164 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" containerName="glance-log"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.232631 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.235203 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.239442 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.244970 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370200 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370258 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370713 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6q9x\" (UniqueName: \"kubernetes.io/projected/d5dbe689-5e11-4832-84c8-d603c08a23e2-kube-api-access-q6q9x\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370856 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370904 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.370979 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.371035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.371067 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472340 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472731 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6q9x\" (UniqueName: \"kubernetes.io/projected/d5dbe689-5e11-4832-84c8-d603c08a23e2-kube-api-access-q6q9x\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472780 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472803 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472832 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472856 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.472925 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.474106 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.474309 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5dbe689-5e11-4832-84c8-d603c08a23e2-logs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.480897 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.481729 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.481765 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/793125420e976eb43638bc1f8c10c1dbf19200ea40f241dea1aa3deff96042e8/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.486273 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.492385 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-scripts\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.497733 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5dbe689-5e11-4832-84c8-d603c08a23e2-config-data\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.511922 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6q9x\" (UniqueName: \"kubernetes.io/projected/d5dbe689-5e11-4832-84c8-d603c08a23e2-kube-api-access-q6q9x\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.573868 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d669ca1-f580-41d6-88d3-29cb32d20522\") pod \"glance-default-external-api-0\" (UID: \"d5dbe689-5e11-4832-84c8-d603c08a23e2\") " pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.854712 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:22 crc kubenswrapper[4808]: I0217 16:16:22.959590 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112078 4808 generic.go:334] "Generic (PLEG): container finished" podID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e" exitCode=0
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112122 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerDied","Data":"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"}
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112437 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0","Type":"ContainerDied","Data":"674bc197545e528a3fae6a8ee441743eba630fd0f6cf0ca9277898370f13b963"}
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112149 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.112458 4808 scope.go:117] "RemoveContainer" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114182 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114304 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114328 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114356 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114382 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114420 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114654 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.114714 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") pod \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\" (UID: \"a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0\") "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.118529 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs" (OuterVolumeSpecName: "logs") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.120114 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.134834 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts" (OuterVolumeSpecName: "scripts") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.134918 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm" (OuterVolumeSpecName: "kube-api-access-wngfm") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "kube-api-access-wngfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.136892 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (OuterVolumeSpecName: "glance") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.140099 4808 scope.go:117] "RemoveContainer" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.174092 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="311ff62c-be53-44b9-a2f7-933e94d8dfb1" path="/var/lib/kubelet/pods/311ff62c-be53-44b9-a2f7-933e94d8dfb1/volumes"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.175192 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.184753 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.193103 4808 scope.go:117] "RemoveContainer" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"
Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.200472 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e\": container with ID starting with 177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e not found: ID does not exist" containerID="177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.200515 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e"} err="failed to get container status \"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e\": rpc error: code = NotFound desc = could not find container \"177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e\": container with ID starting with 177996b4a729c403d13937849e62a1c2bc6f990a64abe1437c1ef760ae1c250e not found: ID does not exist"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.200562 4808 scope.go:117] "RemoveContainer" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"
Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.201257 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674\": container with ID starting with 93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674 not found: ID does not exist" containerID="93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.201294 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674"} err="failed to get container status \"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674\": rpc error: code = NotFound desc = could not find container \"93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674\": container with ID starting with 93b27ef0402c822c4382b1631c2f850f5ab2be4020697d343106fc4f85f7b674 not found: ID does not exist"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.215041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data" (OuterVolumeSpecName: "config-data") pod "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" (UID: "a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.216613 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.216635 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.216645 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.219914 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.219958 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wngfm\" (UniqueName: \"kubernetes.io/projected/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-kube-api-access-wngfm\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.219997 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" "
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.220012 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.220024 4808 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.257484 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.258038 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c") on node "crc"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.322243 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.445210 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.457174 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.469930 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.470506 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470524 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd"
Feb 17 16:16:23 crc kubenswrapper[4808]: E0217 16:16:23.470589 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470597 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470934 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-log"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.470964 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" containerName="glance-httpd"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.473483 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.475950 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.476124 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.488281 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.499891 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 17 16:16:23 crc kubenswrapper[4808]: W0217 16:16:23.509137 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5dbe689_5e11_4832_84c8_d603c08a23e2.slice/crio-3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550 WatchSource:0}: Error finding container 3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550: Status 404 returned error can't find the container with id 3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.630894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.630950 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-logs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631107 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631191 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjjq\" (UniqueName: \"kubernetes.io/projected/b59528d2-0bad-4c66-9971-222dcaf72184-kube-api-access-dkjjq\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631230 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631290 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631326 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.631349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732765 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732837 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkjjq\" (UniqueName: \"kubernetes.io/projected/b59528d2-0bad-4c66-9971-222dcaf72184-kube-api-access-dkjjq\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732867 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732910 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732932 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.732951 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.733005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.733029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-logs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.733944 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-logs\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.734029 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b59528d2-0bad-4c66-9971-222dcaf72184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.739061 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.739148 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/babb0a58e49abb7abbb526a723d7265132519584485959e000cf4b8b02c96a84/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.742364 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.742669 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.744970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.749458 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b59528d2-0bad-4c66-9971-222dcaf72184-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.753843 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkjjq\" (UniqueName: \"kubernetes.io/projected/b59528d2-0bad-4c66-9971-222dcaf72184-kube-api-access-dkjjq\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:23 crc kubenswrapper[4808]: I0217 16:16:23.808243 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cde2fba9-8f9b-406e-abc6-bd786e0adb3c\") pod \"glance-default-internal-api-0\" (UID: \"b59528d2-0bad-4c66-9971-222dcaf72184\") " pod="openstack/glance-default-internal-api-0" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.088439 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.169424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5dbe689-5e11-4832-84c8-d603c08a23e2","Type":"ContainerStarted","Data":"3c1154a88259d7c5533a0bfb92c0746de5fcbd416c6a484170a1f54c17bf6550"} Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.675699 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 17 16:16:24 crc kubenswrapper[4808]: W0217 16:16:24.686372 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb59528d2_0bad_4c66_9971_222dcaf72184.slice/crio-597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b WatchSource:0}: Error finding container 597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b: Status 404 returned error can't find the container with id 597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.889908 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967016 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967103 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967172 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967214 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967324 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.967393 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") pod \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\" (UID: \"c97f3908-a38c-4f62-ace9-1071eb7f8d55\") " Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.970973 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.971241 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.974172 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts" (OuterVolumeSpecName: "scripts") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:24 crc kubenswrapper[4808]: I0217 16:16:24.983461 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx" (OuterVolumeSpecName: "kube-api-access-k8blx") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "kube-api-access-k8blx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.005361 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.070106 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.070863 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.071012 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c97f3908-a38c-4f62-ace9-1071eb7f8d55-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.071096 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8blx\" (UniqueName: \"kubernetes.io/projected/c97f3908-a38c-4f62-ace9-1071eb7f8d55-kube-api-access-k8blx\") on 
node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.071182 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.080744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.128222 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data" (OuterVolumeSpecName: "config-data") pod "c97f3908-a38c-4f62-ace9-1071eb7f8d55" (UID: "c97f3908-a38c-4f62-ace9-1071eb7f8d55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.167690 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0" path="/var/lib/kubelet/pods/a1e93e5a-4047-4ae6-9b8f-c45afedcc6b0/volumes" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.173510 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.173552 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97f3908-a38c-4f62-ace9-1071eb7f8d55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.206740 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5dbe689-5e11-4832-84c8-d603c08a23e2","Type":"ContainerStarted","Data":"3fc0e3e9839ba6ba04d80ec65d4fefff92d9970c5ba78a504133c669ee060018"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.206812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d5dbe689-5e11-4832-84c8-d603c08a23e2","Type":"ContainerStarted","Data":"68fa00e5c58a7a7daea19a1d47626e9d66f57afa40f30874855c7674e068d81f"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216150 4808 generic.go:334] "Generic (PLEG): container finished" podID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" exitCode=0 Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216224 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216256 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c97f3908-a38c-4f62-ace9-1071eb7f8d55","Type":"ContainerDied","Data":"b85ba2e2aadf05c8a92885adbf2c7f51e6f51c7f11cdad1a0c73632146a66e50"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216277 4808 scope.go:117] "RemoveContainer" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.216429 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.227883 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b59528d2-0bad-4c66-9971-222dcaf72184","Type":"ContainerStarted","Data":"597a3ac682e1224c10a08395fef8c338c5adecba0115cc547b97371502dc6e4b"} Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.252620 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.252402237 podStartE2EDuration="3.252402237s" podCreationTimestamp="2026-02-17 16:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:25.228722716 +0000 UTC m=+1348.745081789" watchObservedRunningTime="2026-02-17 16:16:25.252402237 +0000 UTC m=+1348.768761310" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.262286 4808 scope.go:117] "RemoveContainer" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.280474 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.296680 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.310769 4808 scope.go:117] "RemoveContainer" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.314672 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315181 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315201 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315221 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315229 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315257 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315265 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.315303 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315312 4808 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315543 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-notification-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315561 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="ceilometer-central-agent" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315602 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="proxy-httpd" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.315619 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" containerName="sg-core" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.317812 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.333191 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.342403 4808 scope.go:117] "RemoveContainer" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.343480 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.343649 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.373144 4808 scope.go:117] "RemoveContainer" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.375040 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83\": container with ID starting with d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83 not found: ID does not exist" containerID="d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.375096 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83"} err="failed to get container status \"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83\": rpc error: code = NotFound desc = could not find container \"d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83\": container with ID starting with d147d3a774beef8d56b16073b0312fff476cdb9167202637fbefdc69afdfde83 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 
16:16:25.375133 4808 scope.go:117] "RemoveContainer" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.378010 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2\": container with ID starting with 112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2 not found: ID does not exist" containerID="112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378086 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2"} err="failed to get container status \"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2\": rpc error: code = NotFound desc = could not find container \"112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2\": container with ID starting with 112b761c64facadb4f5fba21c4d4dffd36bb2124f569063f0df2934df09e7fd2 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378116 4808 scope.go:117] "RemoveContainer" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.378882 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8\": container with ID starting with 271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8 not found: ID does not exist" containerID="271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378921 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8"} err="failed to get container status \"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8\": rpc error: code = NotFound desc = could not find container \"271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8\": container with ID starting with 271919e71f2932ffb8ee4558779cd5e9d9143c5a96f365f9eb7383d48e958de8 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.378942 4808 scope.go:117] "RemoveContainer" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" Feb 17 16:16:25 crc kubenswrapper[4808]: E0217 16:16:25.379639 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5\": container with ID starting with 301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5 not found: ID does not exist" containerID="301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.379689 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5"} err="failed to get container status \"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5\": rpc error: code = NotFound desc = could not find container \"301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5\": container with ID starting with 301f9423e1208ffad6a659af39889617aa9a122d75c8beea860d6ea0aaa127b5 not found: ID does not exist" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480600 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480702 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480765 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480803 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480853 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.480885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 
16:16:25.481064 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583430 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583472 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583604 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0" Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod 
\"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.583763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.584146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.584267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.587220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.588520 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.589257 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.589763 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.601164 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"ceilometer-0\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:25 crc kubenswrapper[4808]: I0217 16:16:25.698285 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.209379 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:16:26 crc kubenswrapper[4808]: W0217 16:16:26.209922 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8456642_c501_433c_9644_afbe5c7a43e6.slice/crio-3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0 WatchSource:0}: Error finding container 3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0: Status 404 returned error can't find the container with id 3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.241298 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b59528d2-0bad-4c66-9971-222dcaf72184","Type":"ContainerStarted","Data":"d49d69e9af5ef0514c6116e0015e9c73bb90b2d46a58c33141c3338212974e96"}
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.241336 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b59528d2-0bad-4c66-9971-222dcaf72184","Type":"ContainerStarted","Data":"4a0cc7af3e1540c076ba9d914c3743fc8e613a5a7782fbfcc159b718262a9a5c"}
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.242679 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0"}
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.245879 4808 generic.go:334] "Generic (PLEG): container finished" podID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerID="03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f" exitCode=0
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.245962 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerDied","Data":"03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f"}
Feb 17 16:16:26 crc kubenswrapper[4808]: I0217 16:16:26.281153 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.281137613 podStartE2EDuration="3.281137613s" podCreationTimestamp="2026-02-17 16:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:26.277257157 +0000 UTC m=+1349.793616230" watchObservedRunningTime="2026-02-17 16:16:26.281137613 +0000 UTC m=+1349.797496686"
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.122853 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.160912 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97f3908-a38c-4f62-ace9-1071eb7f8d55" path="/var/lib/kubelet/pods/c97f3908-a38c-4f62-ace9-1071eb7f8d55/volumes"
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.256755 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7"}
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.802208 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j"
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925163 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") "
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925346 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") "
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925384 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") "
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.925441 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") pod \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\" (UID: \"a276997e-b8ab-4b5a-ac5f-c21a8114d673\") "
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.933760 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts" (OuterVolumeSpecName: "scripts") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.935792 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj" (OuterVolumeSpecName: "kube-api-access-2fwrj") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "kube-api-access-2fwrj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.958777 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data" (OuterVolumeSpecName: "config-data") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:27 crc kubenswrapper[4808]: I0217 16:16:27.960262 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a276997e-b8ab-4b5a-ac5f-c21a8114d673" (UID: "a276997e-b8ab-4b5a-ac5f-c21a8114d673"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028878 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028911 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028921 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a276997e-b8ab-4b5a-ac5f-c21a8114d673-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.028929 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fwrj\" (UniqueName: \"kubernetes.io/projected/a276997e-b8ab-4b5a-ac5f-c21a8114d673-kube-api-access-2fwrj\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.269390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrx8j" event={"ID":"a276997e-b8ab-4b5a-ac5f-c21a8114d673","Type":"ContainerDied","Data":"268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5"}
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.269441 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268e843d688bb610fddbc979618a94257055f1aecd4284dda615a689b1e070c5"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.269509 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrx8j"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.275652 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1"}
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.275698 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc"}
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.422994 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 17 16:16:28 crc kubenswrapper[4808]: E0217 16:16:28.423868 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerName="nova-cell0-conductor-db-sync"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.423890 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerName="nova-cell0-conductor-db-sync"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.424136 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" containerName="nova-cell0-conductor-db-sync"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.425030 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.433628 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.434290 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcmz6"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.434309 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.539221 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.539386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.539416 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.641189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.641232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.641351 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.645168 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.645386 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.673660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod \"nova-cell0-conductor-0\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:28 crc kubenswrapper[4808]: I0217 16:16:28.742680 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:29 crc kubenswrapper[4808]: I0217 16:16:29.216849 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 17 16:16:29 crc kubenswrapper[4808]: I0217 16:16:29.288799 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerStarted","Data":"7caffc2a919e783df44efecca3e4d55e23b17b2dd4860e6c42d49a0f3c69fe6a"}
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.300068 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerStarted","Data":"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183"}
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.300815 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerStarted","Data":"bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb"}
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303293 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent" containerID="cri-o://3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7" gracePeriod=30
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303479 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd" containerID="cri-o://bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb" gracePeriod=30
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303529 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core" containerID="cri-o://e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1" gracePeriod=30
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.303603 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent" containerID="cri-o://090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc" gracePeriod=30
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.357693 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.172771616 podStartE2EDuration="5.357670305s" podCreationTimestamp="2026-02-17 16:16:25 +0000 UTC" firstStartedPulling="2026-02-17 16:16:26.212122374 +0000 UTC m=+1349.728481447" lastFinishedPulling="2026-02-17 16:16:29.397021043 +0000 UTC m=+1352.913380136" observedRunningTime="2026-02-17 16:16:30.350729106 +0000 UTC m=+1353.867088179" watchObservedRunningTime="2026-02-17 16:16:30.357670305 +0000 UTC m=+1353.874029388"
Feb 17 16:16:30 crc kubenswrapper[4808]: I0217 16:16:30.365095 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.365075715 podStartE2EDuration="2.365075715s" podCreationTimestamp="2026-02-17 16:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:30.324145227 +0000 UTC m=+1353.840504300" watchObservedRunningTime="2026-02-17 16:16:30.365075715 +0000 UTC m=+1353.881434798"
Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.314828 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb" exitCode=0
Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315166 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1" exitCode=2
Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315176 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc" exitCode=0
Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.314899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb"}
Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315287 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1"}
Feb 17 16:16:31 crc kubenswrapper[4808]: I0217 16:16:31.315301 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc"}
Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.855410 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.855955 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.882942 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:32 crc kubenswrapper[4808]: I0217 16:16:32.907141 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:33 crc kubenswrapper[4808]: I0217 16:16:33.341439 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:33 crc kubenswrapper[4808]: I0217 16:16:33.341512 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.089330 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.089401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.126623 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.144019 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.368433 4808 generic.go:334] "Generic (PLEG): container finished" podID="e8456642-c501-433c-9644-afbe5c7a43e6" containerID="3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7" exitCode=0
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.368616 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7"}
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.370296 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.371022 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.791041 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.892298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.892885 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893212 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893308 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893351 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893396 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893514 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.893593 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") pod \"e8456642-c501-433c-9644-afbe5c7a43e6\" (UID: \"e8456642-c501-433c-9644-afbe5c7a43e6\") "
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.894379 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.899663 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph" (OuterVolumeSpecName: "kube-api-access-6pmph") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "kube-api-access-6pmph". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.903863 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts" (OuterVolumeSpecName: "scripts") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.926744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.974855 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996526 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pmph\" (UniqueName: \"kubernetes.io/projected/e8456642-c501-433c-9644-afbe5c7a43e6-kube-api-access-6pmph\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996588 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996603 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996616 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996628 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:34 crc kubenswrapper[4808]: I0217 16:16:34.996639 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8456642-c501-433c-9644-afbe5c7a43e6-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.014220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data" (OuterVolumeSpecName: "config-data") pod "e8456642-c501-433c-9644-afbe5c7a43e6" (UID: "e8456642-c501-433c-9644-afbe5c7a43e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.099439 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8456642-c501-433c-9644-afbe5c7a43e6-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.170552 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.174748 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.390682 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8456642-c501-433c-9644-afbe5c7a43e6","Type":"ContainerDied","Data":"3a9b796e709869b9cb4799f9bb193f7ffc25705102bf28c2fde62d64ed8e86d0"}
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.391071 4808 scope.go:117] "RemoveContainer" containerID="bcff99d6ad66596d26a49c30224a0cbca9b4294d19393339a4468e149a4865eb"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.391136 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.429716 4808 scope.go:117] "RemoveContainer" containerID="e388db597c5f0636fd10ad14fe6e1347ac42817400f36ec63088edd356dbf6e1"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.434830 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.465669 4808 scope.go:117] "RemoveContainer" containerID="090e70ffbb67322583c02e52f1888dc7a40cf42484f5eafcf7a974dc9ca72afc"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.465796 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.498525 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499152 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499171 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent"
Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499184 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499191 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd"
Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499201 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499207 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core"
Feb 17 16:16:35 crc kubenswrapper[4808]: E0217 16:16:35.499228 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499234 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499440 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="proxy-httpd"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499453 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-central-agent"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499465 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="ceilometer-notification-agent"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.499484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" containerName="sg-core"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.503481 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.503728 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.508217 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.508482 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.532182 4808 scope.go:117] "RemoveContainer" containerID="3049e3aba53451516b070bb896da851c0303048da8bc21078f93399256594ef7"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.615978 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616096 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0"
Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616164 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\"
(UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616310 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616361 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.616391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718064 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718158 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718261 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718339 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.718657 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.719560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.722748 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.723203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.723252 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.732929 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.733428 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"ceilometer-0\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " pod="openstack/ceilometer-0" Feb 17 16:16:35 crc kubenswrapper[4808]: I0217 16:16:35.853127 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:36 crc kubenswrapper[4808]: I0217 16:16:36.235896 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:36 crc kubenswrapper[4808]: I0217 16:16:36.301141 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 17 16:16:36 crc kubenswrapper[4808]: I0217 16:16:36.418347 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:37 crc kubenswrapper[4808]: I0217 16:16:37.184549 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8456642-c501-433c-9644-afbe5c7a43e6" path="/var/lib/kubelet/pods/e8456642-c501-433c-9644-afbe5c7a43e6/volumes" Feb 17 16:16:37 crc kubenswrapper[4808]: I0217 16:16:37.445692 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3"} Feb 17 16:16:37 crc kubenswrapper[4808]: I0217 16:16:37.446038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"91d1642df2334e4f429a191525235bf1d0f2f6da6b1932c826f1850f30b2d130"} Feb 17 16:16:38 crc kubenswrapper[4808]: I0217 16:16:38.181984 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:38 
crc kubenswrapper[4808]: I0217 16:16:38.182469 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" containerID="cri-o://7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" gracePeriod=30 Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.187488 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.188890 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.190106 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.190138 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:38 crc kubenswrapper[4808]: I0217 16:16:38.459410 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b"} Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.745003 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.746822 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.747719 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Feb 17 16:16:38 crc kubenswrapper[4808]: E0217 16:16:38.747754 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:39 crc kubenswrapper[4808]: I0217 16:16:39.471008 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60"} Feb 17 16:16:39 crc kubenswrapper[4808]: I0217 16:16:39.727886 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.489798 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerStarted","Data":"3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00"} Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490211 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" containerID="cri-o://96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490162 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" containerID="cri-o://9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490193 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" containerID="cri-o://3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490223 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.490102 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" containerID="cri-o://450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3" gracePeriod=30 Feb 17 16:16:40 crc kubenswrapper[4808]: I0217 16:16:40.521756 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.225791879 podStartE2EDuration="5.521728496s" podCreationTimestamp="2026-02-17 16:16:35 +0000 UTC" firstStartedPulling="2026-02-17 16:16:36.437055021 +0000 UTC m=+1359.953414094" lastFinishedPulling="2026-02-17 16:16:39.732991648 +0000 UTC m=+1363.249350711" observedRunningTime="2026-02-17 16:16:40.512130802 +0000 UTC m=+1364.028489875" watchObservedRunningTime="2026-02-17 16:16:40.521728496 +0000 UTC m=+1364.038087569" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.028212 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.230393 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") pod \"793e01c5-a9c7-4545-8244-34a6bae837dc\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.230750 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") pod \"793e01c5-a9c7-4545-8244-34a6bae837dc\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.230984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") pod 
\"793e01c5-a9c7-4545-8244-34a6bae837dc\" (UID: \"793e01c5-a9c7-4545-8244-34a6bae837dc\") " Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.236208 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr" (OuterVolumeSpecName: "kube-api-access-s7rtr") pod "793e01c5-a9c7-4545-8244-34a6bae837dc" (UID: "793e01c5-a9c7-4545-8244-34a6bae837dc"). InnerVolumeSpecName "kube-api-access-s7rtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.263788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data" (OuterVolumeSpecName: "config-data") pod "793e01c5-a9c7-4545-8244-34a6bae837dc" (UID: "793e01c5-a9c7-4545-8244-34a6bae837dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.266083 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "793e01c5-a9c7-4545-8244-34a6bae837dc" (UID: "793e01c5-a9c7-4545-8244-34a6bae837dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.333521 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.333561 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/793e01c5-a9c7-4545-8244-34a6bae837dc-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.333588 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7rtr\" (UniqueName: \"kubernetes.io/projected/793e01c5-a9c7-4545-8244-34a6bae837dc-kube-api-access-s7rtr\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502818 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00" exitCode=0 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502860 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60" exitCode=2 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502871 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b" exitCode=0 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502890 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502933 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.502944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505010 4808 generic.go:334] "Generic (PLEG): container finished" podID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" exitCode=0 Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505056 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505088 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerDied","Data":"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505142 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"793e01c5-a9c7-4545-8244-34a6bae837dc","Type":"ContainerDied","Data":"7caffc2a919e783df44efecca3e4d55e23b17b2dd4860e6c42d49a0f3c69fe6a"} Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.505172 4808 scope.go:117] "RemoveContainer" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.561173 4808 scope.go:117] "RemoveContainer" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" Feb 17 16:16:41 crc kubenswrapper[4808]: E0217 16:16:41.562781 
4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183\": container with ID starting with 7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183 not found: ID does not exist" containerID="7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.562837 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183"} err="failed to get container status \"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183\": rpc error: code = NotFound desc = could not find container \"7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183\": container with ID starting with 7228d7aa3cccfbefcccd5def675e46d2b68a93553954a7160ff2f9acc2f06183 not found: ID does not exist" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.564807 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.588512 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.615651 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: E0217 16:16:41.616176 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.616196 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.616553 4808 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" containerName="nova-cell0-conductor-conductor" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.617442 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.619967 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tcmz6" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.620971 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.627037 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.743020 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.743139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfl4m\" (UniqueName: \"kubernetes.io/projected/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-kube-api-access-lfl4m\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.743182 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " 
pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.844712 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.844822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfl4m\" (UniqueName: \"kubernetes.io/projected/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-kube-api-access-lfl4m\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.844865 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.857733 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.858246 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.883399 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfl4m\" (UniqueName: \"kubernetes.io/projected/fd596411-c54c-4a8a-9b6a-420b6ab3c9ff-kube-api-access-lfl4m\") pod \"nova-cell0-conductor-0\" (UID: \"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff\") " pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:41 crc kubenswrapper[4808]: I0217 16:16:41.943081 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:42 crc kubenswrapper[4808]: I0217 16:16:42.452211 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 17 16:16:42 crc kubenswrapper[4808]: W0217 16:16:42.465284 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd596411_c54c_4a8a_9b6a_420b6ab3c9ff.slice/crio-98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616 WatchSource:0}: Error finding container 98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616: Status 404 returned error can't find the container with id 98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616 Feb 17 16:16:42 crc kubenswrapper[4808]: I0217 16:16:42.528894 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff","Type":"ContainerStarted","Data":"98cc3a2f583961a448a76b0dea95c16aaa2d1129a8b03497108f967d9102a616"} Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.157874 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793e01c5-a9c7-4545-8244-34a6bae837dc" path="/var/lib/kubelet/pods/793e01c5-a9c7-4545-8244-34a6bae837dc/volumes" Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.538657 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"fd596411-c54c-4a8a-9b6a-420b6ab3c9ff","Type":"ContainerStarted","Data":"6b76ca9582a3f7fa8574efca5f8781ff8022549fc27b18113ef1087c180daf14"} Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.538820 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:43 crc kubenswrapper[4808]: I0217 16:16:43.576725 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.576708034 podStartE2EDuration="2.576708034s" podCreationTimestamp="2026-02-17 16:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:43.568093046 +0000 UTC m=+1367.084452119" watchObservedRunningTime="2026-02-17 16:16:43.576708034 +0000 UTC m=+1367.093067107" Feb 17 16:16:44 crc kubenswrapper[4808]: I0217 16:16:44.549527 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerID="450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3" exitCode=0 Feb 17 16:16:44 crc kubenswrapper[4808]: I0217 16:16:44.549614 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3"} Feb 17 16:16:44 crc kubenswrapper[4808]: I0217 16:16:44.872450 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.011764 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012106 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012151 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012208 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012236 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012260 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5g5j\" (UniqueName: 
\"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012335 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") pod \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\" (UID: \"8d522679-0f73-4d58-b7f7-ddb835a4dbc6\") " Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.012892 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.013175 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.013192 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.018845 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts" (OuterVolumeSpecName: "scripts") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.022975 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j" (OuterVolumeSpecName: "kube-api-access-n5g5j") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "kube-api-access-n5g5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.048781 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.101718 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114928 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114965 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114979 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5g5j\" (UniqueName: \"kubernetes.io/projected/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-kube-api-access-n5g5j\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.114996 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.117780 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data" (OuterVolumeSpecName: "config-data") pod "8d522679-0f73-4d58-b7f7-ddb835a4dbc6" (UID: "8d522679-0f73-4d58-b7f7-ddb835a4dbc6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.217015 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d522679-0f73-4d58-b7f7-ddb835a4dbc6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.561938 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d522679-0f73-4d58-b7f7-ddb835a4dbc6","Type":"ContainerDied","Data":"91d1642df2334e4f429a191525235bf1d0f2f6da6b1932c826f1850f30b2d130"} Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.561983 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.562001 4808 scope.go:117] "RemoveContainer" containerID="3a6dfdb0ccfc744dd33488cddc605d674671cc5457e3b826471944a3b570fd00" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.599606 4808 scope.go:117] "RemoveContainer" containerID="9cca18216dca5f726c4eff2fcf22a755d97483924e20771afa5abfba085c3a60" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.607535 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.635094 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.638327 4808 scope.go:117] "RemoveContainer" containerID="96437272da8dbbc5f00ffd256113919496f22a8bc78f00ba1c720a2e3dc2be0b" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645035 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645442 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" Feb 17 16:16:45 crc kubenswrapper[4808]: 
I0217 16:16:45.645454 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645463 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645469 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645482 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645489 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: E0217 16:16:45.645501 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645507 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645825 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-notification-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645863 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="ceilometer-central-agent" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645881 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="sg-core" Feb 
17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.645896 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" containerName="proxy-httpd" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.648177 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.658977 4808 scope.go:117] "RemoveContainer" containerID="450253ec624601825b2ade75676906be1f978ed00a8d079f0e7831c9dab89ee3" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.668003 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.668336 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.669431 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728502 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728808 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728858 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.728894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.729018 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.729142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.729267 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.831522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " 
pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.831727 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.831877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832613 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832666 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.832750 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.833173 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.833397 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.837708 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.837918 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.845390 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.852565 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.854078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"ceilometer-0\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") " pod="openstack/ceilometer-0" Feb 17 16:16:45 crc kubenswrapper[4808]: I0217 16:16:45.991163 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:16:46 crc kubenswrapper[4808]: I0217 16:16:46.475136 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:16:46 crc kubenswrapper[4808]: I0217 16:16:46.572004 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"48499d1ccd18294cde816d0461ae46337409d9b91f256c480873ba6063c87133"} Feb 17 16:16:47 crc kubenswrapper[4808]: I0217 16:16:47.160997 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d522679-0f73-4d58-b7f7-ddb835a4dbc6" path="/var/lib/kubelet/pods/8d522679-0f73-4d58-b7f7-ddb835a4dbc6/volumes" Feb 17 16:16:47 crc kubenswrapper[4808]: I0217 16:16:47.593039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5"} Feb 17 16:16:48 crc kubenswrapper[4808]: I0217 16:16:48.606967 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90"} Feb 17 16:16:48 crc kubenswrapper[4808]: I0217 16:16:48.607323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf"} Feb 17 16:16:50 crc kubenswrapper[4808]: I0217 16:16:50.643905 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerStarted","Data":"d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe"} Feb 17 16:16:50 crc kubenswrapper[4808]: I0217 16:16:50.644500 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:16:50 crc kubenswrapper[4808]: I0217 16:16:50.688292 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.323931145 podStartE2EDuration="5.688272212s" podCreationTimestamp="2026-02-17 16:16:45 +0000 UTC" firstStartedPulling="2026-02-17 16:16:46.469873257 +0000 UTC m=+1369.986232340" lastFinishedPulling="2026-02-17 16:16:49.834214334 +0000 UTC m=+1373.350573407" observedRunningTime="2026-02-17 16:16:50.666600288 +0000 UTC m=+1374.182959381" watchObservedRunningTime="2026-02-17 16:16:50.688272212 +0000 UTC m=+1374.204631285" Feb 17 16:16:51 crc kubenswrapper[4808]: I0217 16:16:51.973484 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.540946 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.546153 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.548761 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.549684 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.567657 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.687730 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690398 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: 
I0217 16:16:52.690634 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.690681 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.692793 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.701025 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.767053 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.768696 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.771837 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.786858 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.788291 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792078 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792204 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792226 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792296 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792342 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.792367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.797409 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.799249 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.801140 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: 
\"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.818645 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.838305 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.854203 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"nova-cell0-cell-mapping-lhrsb\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") " pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.869216 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.881998 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.897729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898006 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898135 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898267 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898367 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898839 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.898983 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.899128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.899238 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " 
pod="openstack/nova-scheduler-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.905562 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.907218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.913933 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.944031 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"nova-api-0\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " pod="openstack/nova-api-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.972664 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.974322 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:16:52 crc kubenswrapper[4808]: I0217 16:16:52.981947 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.005336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.005649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.005922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.006043 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.006188 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.006286 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.014111 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.018739 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.021034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.022127 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.039947 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.050874 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"nova-scheduler-0\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.053530 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"nova-cell1-novncproxy-0\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.066909 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.069075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.094257 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111197 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111342 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111368 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111416 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.111523 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.120938 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.133963 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.215825 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.215881 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.215933 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.216007 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.220441 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " 
pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.243143 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.274130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.292426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319495 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319531 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319586 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319637 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.319688 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.386097 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422395 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422455 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422488 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422552 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422622 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 
16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.422649 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.425800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.428221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.428273 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.428683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.429136 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.436979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"dnsmasq-dns-78cd565959-ktqh6\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") " pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.499118 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.627894 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.684825 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerStarted","Data":"8246e1d9e27ac063f20e993837fefe05ee7faed0616a81f38ae63adc17f5680c"} Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.776805 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.778210 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.780986 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.781175 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.795436 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.932443 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.932802 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.932956 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:53 crc kubenswrapper[4808]: I0217 16:16:53.933189 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.034746 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.035788 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.035863 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.035931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh" Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.040986 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.041118 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.044686 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.067146 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"nova-cell1-conductor-db-sync-46chh\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") " pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.079024 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.089754 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.109376 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.127738 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"]
Feb 17 16:16:54 crc kubenswrapper[4808]: W0217 16:16:54.133823 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17dd9003_af7c_4ead_bd8a_69dd599672e1.slice/crio-6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94 WatchSource:0}: Error finding container 6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94: Status 404 returned error can't find the container with id 6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.149810 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.197334 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.707922 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerStarted","Data":"ee5e98cadb90446acabe123662e49b6a4cd2eca56be18b81e05b30047bcff9c1"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.715067 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerStarted","Data":"c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.718679 4808 generic.go:334] "Generic (PLEG): container finished" podID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerID="3ef21441db2673d8cb4a73235d72eeb9fb765f3ab14514345fdd78ed72a42293" exitCode=0
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.718729 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerDied","Data":"3ef21441db2673d8cb4a73235d72eeb9fb765f3ab14514345fdd78ed72a42293"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.718748 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerStarted","Data":"6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.719538 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"]
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.721720 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerStarted","Data":"b5824b16acbd91bc8be7043e9329004ce8288b6bdf03b1752a9c0085eb731c99"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.722869 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerStarted","Data":"21c9110345aef4dc69cbeac414de965fd822d356a427b405912ce038ca889eb8"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.724842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerStarted","Data":"70be81454915c76edbe1bd9f9a80641c32a52e8409743ccd53fcf3858d18b2d6"}
Feb 17 16:16:54 crc kubenswrapper[4808]: I0217 16:16:54.735117 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-lhrsb" podStartSLOduration=2.735099446 podStartE2EDuration="2.735099446s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:54.733704918 +0000 UTC m=+1378.250083742" watchObservedRunningTime="2026-02-17 16:16:54.735099446 +0000 UTC m=+1378.251458519"
Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.743725 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerStarted","Data":"531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096"}
Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.744079 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerStarted","Data":"2d3829a8cd87e1e7493f796b94998c113c1da2acebe2d18b959cae6d8ec1e0ba"}
Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.746557 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerStarted","Data":"60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae"}
Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.787812 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" podStartSLOduration=2.787791221 podStartE2EDuration="2.787791221s" podCreationTimestamp="2026-02-17 16:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:55.786550598 +0000 UTC m=+1379.302909671" watchObservedRunningTime="2026-02-17 16:16:55.787791221 +0000 UTC m=+1379.304150294"
Feb 17 16:16:55 crc kubenswrapper[4808]: I0217 16:16:55.795307 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-46chh" podStartSLOduration=2.795285729 podStartE2EDuration="2.795285729s" podCreationTimestamp="2026-02-17 16:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:16:55.760207381 +0000 UTC m=+1379.276566464" watchObservedRunningTime="2026-02-17 16:16:55.795285729 +0000 UTC m=+1379.311644802"
Feb 17 16:16:56 crc kubenswrapper[4808]: I0217 16:16:56.385219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:16:56 crc kubenswrapper[4808]: I0217 16:16:56.435290 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 17 16:16:56 crc kubenswrapper[4808]: I0217 16:16:56.754378 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-ktqh6"
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.765863 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerStarted","Data":"6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959"}
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.766483 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerStarted","Data":"b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef"}
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.766134 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" containerID="cri-o://6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959" gracePeriod=30
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.766068 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" containerID="cri-o://b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef" gracePeriod=30
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.770080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerStarted","Data":"93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa"}
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.770227 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa" gracePeriod=30
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.773496 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerStarted","Data":"e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f"}
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.782123 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerStarted","Data":"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"}
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.782164 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerStarted","Data":"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"}
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.787527 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.753312787 podStartE2EDuration="5.787510685s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.141297897 +0000 UTC m=+1377.657656960" lastFinishedPulling="2026-02-17 16:16:57.175495785 +0000 UTC m=+1380.691854858" observedRunningTime="2026-02-17 16:16:57.784435883 +0000 UTC m=+1381.300794956" watchObservedRunningTime="2026-02-17 16:16:57.787510685 +0000 UTC m=+1381.303869758"
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.806192 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.72881628 podStartE2EDuration="5.806169779s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.107736279 +0000 UTC m=+1377.624095352" lastFinishedPulling="2026-02-17 16:16:57.185089778 +0000 UTC m=+1380.701448851" observedRunningTime="2026-02-17 16:16:57.805591514 +0000 UTC m=+1381.321950587" watchObservedRunningTime="2026-02-17 16:16:57.806169779 +0000 UTC m=+1381.322528852"
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.822761 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.696271637 podStartE2EDuration="5.822736107s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.049122737 +0000 UTC m=+1377.565481810" lastFinishedPulling="2026-02-17 16:16:57.175587207 +0000 UTC m=+1380.691946280" observedRunningTime="2026-02-17 16:16:57.822044359 +0000 UTC m=+1381.338403422" watchObservedRunningTime="2026-02-17 16:16:57.822736107 +0000 UTC m=+1381.339095180"
Feb 17 16:16:57 crc kubenswrapper[4808]: I0217 16:16:57.851160 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.795982097 podStartE2EDuration="5.851141839s" podCreationTimestamp="2026-02-17 16:16:52 +0000 UTC" firstStartedPulling="2026-02-17 16:16:54.121967345 +0000 UTC m=+1377.638326418" lastFinishedPulling="2026-02-17 16:16:57.177127087 +0000 UTC m=+1380.693486160" observedRunningTime="2026-02-17 16:16:57.846038054 +0000 UTC m=+1381.362397127" watchObservedRunningTime="2026-02-17 16:16:57.851141839 +0000 UTC m=+1381.367500912"
Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.070382 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.095563 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.386696 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.386753 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.800507 4808 generic.go:334] "Generic (PLEG): container finished" podID="018b3b96-1953-4437-83ab-99bc970bcd36" containerID="b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef" exitCode=143
Feb 17 16:16:58 crc kubenswrapper[4808]: I0217 16:16:58.800750 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerDied","Data":"b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef"}
Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.842390 4808 generic.go:334] "Generic (PLEG): container finished" podID="8d64831b-aec0-42cd-96ec-831ec911d921" containerID="531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096" exitCode=0
Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.842641 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerDied","Data":"531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096"}
Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.848073 4808 generic.go:334] "Generic (PLEG): container finished" podID="3864d41e-915e-4b73-908e-c575d38863e9" containerID="c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc" exitCode=0
Feb 17 16:17:01 crc kubenswrapper[4808]: I0217 16:17:01.848150 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerDied","Data":"c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc"}
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.015052 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.015184 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.095600 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: E0217 16:17:03.123960 4808 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.133149 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.500864 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-ktqh6"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.512002 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.518241 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.600354 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"]
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.600704 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" containerID="cri-o://893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53" gracePeriod=10
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669305 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") pod \"8d64831b-aec0-42cd-96ec-831ec911d921\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669327 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669458 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") pod \"8d64831b-aec0-42cd-96ec-831ec911d921\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669533 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") pod \"3864d41e-915e-4b73-908e-c575d38863e9\" (UID: \"3864d41e-915e-4b73-908e-c575d38863e9\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669585 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") pod \"8d64831b-aec0-42cd-96ec-831ec911d921\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.669604 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") pod \"8d64831b-aec0-42cd-96ec-831ec911d921\" (UID: \"8d64831b-aec0-42cd-96ec-831ec911d921\") "
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.697761 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2" (OuterVolumeSpecName: "kube-api-access-zdrx2") pod "3864d41e-915e-4b73-908e-c575d38863e9" (UID: "3864d41e-915e-4b73-908e-c575d38863e9"). InnerVolumeSpecName "kube-api-access-zdrx2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.709834 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6" (OuterVolumeSpecName: "kube-api-access-2krh6") pod "8d64831b-aec0-42cd-96ec-831ec911d921" (UID: "8d64831b-aec0-42cd-96ec-831ec911d921"). InnerVolumeSpecName "kube-api-access-2krh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.709977 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts" (OuterVolumeSpecName: "scripts") pod "8d64831b-aec0-42cd-96ec-831ec911d921" (UID: "8d64831b-aec0-42cd-96ec-831ec911d921"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.730603 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts" (OuterVolumeSpecName: "scripts") pod "3864d41e-915e-4b73-908e-c575d38863e9" (UID: "3864d41e-915e-4b73-908e-c575d38863e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772077 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772103 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2krh6\" (UniqueName: \"kubernetes.io/projected/8d64831b-aec0-42cd-96ec-831ec911d921-kube-api-access-2krh6\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772115 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.772124 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdrx2\" (UniqueName: \"kubernetes.io/projected/3864d41e-915e-4b73-908e-c575d38863e9-kube-api-access-zdrx2\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.794712 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3864d41e-915e-4b73-908e-c575d38863e9" (UID: "3864d41e-915e-4b73-908e-c575d38863e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.822770 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data" (OuterVolumeSpecName: "config-data") pod "8d64831b-aec0-42cd-96ec-831ec911d921" (UID: "8d64831b-aec0-42cd-96ec-831ec911d921"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.835077 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d64831b-aec0-42cd-96ec-831ec911d921" (UID: "8d64831b-aec0-42cd-96ec-831ec911d921"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.843401 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data" (OuterVolumeSpecName: "config-data") pod "3864d41e-915e-4b73-908e-c575d38863e9" (UID: "3864d41e-915e-4b73-908e-c575d38863e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873678 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873722 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873737 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d64831b-aec0-42cd-96ec-831ec911d921-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.873747 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3864d41e-915e-4b73-908e-c575d38863e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.900264 4808 generic.go:334] "Generic (PLEG): container finished" podID="ef386302-14e1-4b00-b816-e85da8d23114" containerID="893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53" exitCode=0
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.900342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerDied","Data":"893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53"}
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.903622 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-46chh" event={"ID":"8d64831b-aec0-42cd-96ec-831ec911d921","Type":"ContainerDied","Data":"2d3829a8cd87e1e7493f796b94998c113c1da2acebe2d18b959cae6d8ec1e0ba"}
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.903649 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d3829a8cd87e1e7493f796b94998c113c1da2acebe2d18b959cae6d8ec1e0ba"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.903711 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-46chh"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.909162 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-lhrsb"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.912674 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-lhrsb" event={"ID":"3864d41e-915e-4b73-908e-c575d38863e9","Type":"ContainerDied","Data":"8246e1d9e27ac063f20e993837fefe05ee7faed0616a81f38ae63adc17f5680c"}
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.912724 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8246e1d9e27ac063f20e993837fefe05ee7faed0616a81f38ae63adc17f5680c"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.963134 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.964791 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67bdc55879-786qn" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: connect: connection refused"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.972566 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 16:17:03 crc kubenswrapper[4808]: E0217 16:17:03.973557 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3864d41e-915e-4b73-908e-c575d38863e9" containerName="nova-manage"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973590 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3864d41e-915e-4b73-908e-c575d38863e9" containerName="nova-manage"
Feb 17 16:17:03 crc kubenswrapper[4808]: E0217 16:17:03.973625 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" containerName="nova-cell1-conductor-db-sync"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973631 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" containerName="nova-cell1-conductor-db-sync"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973832 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3864d41e-915e-4b73-908e-c575d38863e9" containerName="nova-manage"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.973856 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" containerName="nova-cell1-conductor-db-sync"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.975960 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.978512 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 17 16:17:03 crc kubenswrapper[4808]: I0217 16:17:03.992383 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.077422 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb56m\" (UniqueName: \"kubernetes.io/projected/1c30e340-2218-46f6-97d6-aaf96a54d84d-kube-api-access-kb56m\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.077528 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.077736 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.084056 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.084315 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" containerID="cri-o://5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96" gracePeriod=30
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.084405 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api" containerID="cri-o://56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" gracePeriod=30
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.089847 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.089962 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.179679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.180257 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb56m\" (UniqueName: \"kubernetes.io/projected/1c30e340-2218-46f6-97d6-aaf96a54d84d-kube-api-access-kb56m\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.180342 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.185250 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.185360 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c30e340-2218-46f6-97d6-aaf96a54d84d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.197841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb56m\" (UniqueName: \"kubernetes.io/projected/1c30e340-2218-46f6-97d6-aaf96a54d84d-kube-api-access-kb56m\") pod \"nova-cell1-conductor-0\" (UID: \"1c30e340-2218-46f6-97d6-aaf96a54d84d\") " pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.295257 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.483603 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.781367 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-786qn"
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.896891 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") pod \"ef386302-14e1-4b00-b816-e85da8d23114\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") "
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.896954 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") pod \"ef386302-14e1-4b00-b816-e85da8d23114\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") "
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.897018 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") pod \"ef386302-14e1-4b00-b816-e85da8d23114\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") "
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.897127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") pod \"ef386302-14e1-4b00-b816-e85da8d23114\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") "
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.897173 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") pod \"ef386302-14e1-4b00-b816-e85da8d23114\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") "
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.897261 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") pod \"ef386302-14e1-4b00-b816-e85da8d23114\" (UID: \"ef386302-14e1-4b00-b816-e85da8d23114\") "
Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.906446 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq" (OuterVolumeSpecName: "kube-api-access-zrdlq") pod "ef386302-14e1-4b00-b816-e85da8d23114" (UID: "ef386302-14e1-4b00-b816-e85da8d23114"). InnerVolumeSpecName "kube-api-access-zrdlq".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.907510 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.962403 4808 generic.go:334] "Generic (PLEG): container finished" podID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96" exitCode=143 Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.962507 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerDied","Data":"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"} Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.970886 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-786qn" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.971419 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-786qn" event={"ID":"ef386302-14e1-4b00-b816-e85da8d23114","Type":"ContainerDied","Data":"d83fa5a20f760435e6a158fc895b5bd4256f47d348c4b60bfa4934c4b8383f1a"} Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.971497 4808 scope.go:117] "RemoveContainer" containerID="893c1ea963c8e724fa2b9baa335921cef2a62410cb7f634726388e519c6b4a53" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.982239 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef386302-14e1-4b00-b816-e85da8d23114" (UID: "ef386302-14e1-4b00-b816-e85da8d23114"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.982774 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef386302-14e1-4b00-b816-e85da8d23114" (UID: "ef386302-14e1-4b00-b816-e85da8d23114"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:04 crc kubenswrapper[4808]: I0217 16:17:04.986387 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config" (OuterVolumeSpecName: "config") pod "ef386302-14e1-4b00-b816-e85da8d23114" (UID: "ef386302-14e1-4b00-b816-e85da8d23114"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:04.998089 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ef386302-14e1-4b00-b816-e85da8d23114" (UID: "ef386302-14e1-4b00-b816-e85da8d23114"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:04.999450 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:04.999463 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdlq\" (UniqueName: \"kubernetes.io/projected/ef386302-14e1-4b00-b816-e85da8d23114-kube-api-access-zrdlq\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:04.999475 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:04.999484 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:04.999494 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.020684 4808 scope.go:117] "RemoveContainer" containerID="76cc030230faf69f3923cb1665482598e8d9c392060ca1c1353369b5c8628b5a" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.025185 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef386302-14e1-4b00-b816-e85da8d23114" (UID: "ef386302-14e1-4b00-b816-e85da8d23114"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.101379 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef386302-14e1-4b00-b816-e85da8d23114-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.317160 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"] Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.330802 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-786qn"] Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.983979 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1c30e340-2218-46f6-97d6-aaf96a54d84d","Type":"ContainerStarted","Data":"ea3f7e40f80522c56e37edd559cc9bf1d030dcd18d47b61ac14f3758eb66a051"} Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.984307 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1c30e340-2218-46f6-97d6-aaf96a54d84d","Type":"ContainerStarted","Data":"f2a3d5bab03f7e6dd5b1dff5b8d3d24458a7173b132eada3357778ca0ca4e724"} Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.984334 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 17 16:17:05 crc kubenswrapper[4808]: I0217 16:17:05.986002 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler" containerID="cri-o://e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" gracePeriod=30 Feb 17 16:17:06 crc kubenswrapper[4808]: I0217 16:17:06.014310 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" 
podStartSLOduration=3.014283814 podStartE2EDuration="3.014283814s" podCreationTimestamp="2026-02-17 16:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:06.003691104 +0000 UTC m=+1389.520050177" watchObservedRunningTime="2026-02-17 16:17:06.014283814 +0000 UTC m=+1389.530642887" Feb 17 16:17:07 crc kubenswrapper[4808]: I0217 16:17:07.159443 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef386302-14e1-4b00-b816-e85da8d23114" path="/var/lib/kubelet/pods/ef386302-14e1-4b00-b816-e85da8d23114/volumes" Feb 17 16:17:08 crc kubenswrapper[4808]: E0217 16:17:08.097550 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:17:08 crc kubenswrapper[4808]: E0217 16:17:08.099125 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:17:08 crc kubenswrapper[4808]: E0217 16:17:08.101280 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:17:08 crc kubenswrapper[4808]: E0217 16:17:08.101460 4808 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.014678 4808 generic.go:334] "Generic (PLEG): container finished" podID="4b35f2cf-f95a-4467-a797-79239af955c4" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" exitCode=0 Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.014857 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerDied","Data":"e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f"} Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.439843 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.490228 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") pod \"4b35f2cf-f95a-4467-a797-79239af955c4\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.490362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") pod \"4b35f2cf-f95a-4467-a797-79239af955c4\" (UID: \"4b35f2cf-f95a-4467-a797-79239af955c4\") " Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.490481 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") pod \"4b35f2cf-f95a-4467-a797-79239af955c4\" (UID: 
\"4b35f2cf-f95a-4467-a797-79239af955c4\") " Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.496348 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd" (OuterVolumeSpecName: "kube-api-access-9nvbd") pod "4b35f2cf-f95a-4467-a797-79239af955c4" (UID: "4b35f2cf-f95a-4467-a797-79239af955c4"). InnerVolumeSpecName "kube-api-access-9nvbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.523990 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b35f2cf-f95a-4467-a797-79239af955c4" (UID: "4b35f2cf-f95a-4467-a797-79239af955c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.527049 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data" (OuterVolumeSpecName: "config-data") pod "4b35f2cf-f95a-4467-a797-79239af955c4" (UID: "4b35f2cf-f95a-4467-a797-79239af955c4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.593514 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.593548 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b35f2cf-f95a-4467-a797-79239af955c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:09 crc kubenswrapper[4808]: I0217 16:17:09.593559 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvbd\" (UniqueName: \"kubernetes.io/projected/4b35f2cf-f95a-4467-a797-79239af955c4-kube-api-access-9nvbd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.001353 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.027843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4b35f2cf-f95a-4467-a797-79239af955c4","Type":"ContainerDied","Data":"70be81454915c76edbe1bd9f9a80641c32a52e8409743ccd53fcf3858d18b2d6"} Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.027901 4808 scope.go:117] "RemoveContainer" containerID="e515390cffb4ded639584839b29e5e7f5a819a4fb088e1aca8a2d5cd4b56159f" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.027921 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036514 4808 generic.go:334] "Generic (PLEG): container finished" podID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" exitCode=0 Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036556 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerDied","Data":"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"} Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036632 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.036640 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d49b36d0-eee7-4656-a6d8-cdf627d181b4","Type":"ContainerDied","Data":"ee5e98cadb90446acabe123662e49b6a4cd2eca56be18b81e05b30047bcff9c1"} Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.057114 4808 scope.go:117] "RemoveContainer" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.080507 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.085300 4808 scope.go:117] "RemoveContainer" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.099211 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100038 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") pod 
\"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100244 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100357 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.100562 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") pod \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\" (UID: \"d49b36d0-eee7-4656-a6d8-cdf627d181b4\") " Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.101040 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs" (OuterVolumeSpecName: "logs") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.101837 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d49b36d0-eee7-4656-a6d8-cdf627d181b4-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.113737 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114230 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="init" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114250 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="init" Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114275 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114283 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114299 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114307 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler" Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114328 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114335 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" 
containerName="nova-api-api" Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.114350 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114357 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114614 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" containerName="nova-scheduler-scheduler" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114645 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-log" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114655 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef386302-14e1-4b00-b816-e85da8d23114" containerName="dnsmasq-dns" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.114683 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" containerName="nova-api-api" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.115206 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8" (OuterVolumeSpecName: "kube-api-access-pnzg8") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "kube-api-access-pnzg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.120132 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.123625 4808 scope.go:117] "RemoveContainer" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.125160 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc\": container with ID starting with 56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc not found: ID does not exist" containerID="56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.125205 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc"} err="failed to get container status \"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc\": rpc error: code = NotFound desc = could not find container \"56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc\": container with ID starting with 56a71b058c9c5e5186facb8c41dbcbe7e8bd3a8aec3a171c84f15d63846949cc not found: ID does not exist" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.125232 4808 scope.go:117] "RemoveContainer" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96" Feb 17 16:17:10 crc kubenswrapper[4808]: E0217 16:17:10.126035 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96\": container with ID starting with 5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96 not found: ID does not exist" containerID="5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 
16:17:10.126067 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96"} err="failed to get container status \"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96\": rpc error: code = NotFound desc = could not find container \"5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96\": container with ID starting with 5b2d1102f6f02c603c50170454469936029b3e4b59fdb0bc3ba9eef7842c5f96 not found: ID does not exist" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.139251 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.141429 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.144827 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data" (OuterVolumeSpecName: "config-data") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.174842 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d49b36d0-eee7-4656-a6d8-cdf627d181b4" (UID: "d49b36d0-eee7-4656-a6d8-cdf627d181b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.204631 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.204997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205092 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205255 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnzg8\" (UniqueName: \"kubernetes.io/projected/d49b36d0-eee7-4656-a6d8-cdf627d181b4-kube-api-access-pnzg8\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205277 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.205290 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d49b36d0-eee7-4656-a6d8-cdf627d181b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.306911 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.306978 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.307015 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.310656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.310729 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.328233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"nova-scheduler-0\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.372994 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.386948 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.406048 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.407766 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.421170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.442010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512130 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.512709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.562064 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614398 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614477 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.614497 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.615755 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.618691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.620201 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.634278 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"nova-api-0\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " pod="openstack/nova-api-0"
Feb 17 16:17:10 crc kubenswrapper[4808]: I0217 16:17:10.742393 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.032955 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:11 crc kubenswrapper[4808]: W0217 16:17:11.039495 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc906d5a8_4187_4f58_a352_fa7faea85309.slice/crio-3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8 WatchSource:0}: Error finding container 3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8: Status 404 returned error can't find the container with id 3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8
Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.163135 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b35f2cf-f95a-4467-a797-79239af955c4" path="/var/lib/kubelet/pods/4b35f2cf-f95a-4467-a797-79239af955c4/volumes"
Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.164388 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49b36d0-eee7-4656-a6d8-cdf627d181b4" path="/var/lib/kubelet/pods/d49b36d0-eee7-4656-a6d8-cdf627d181b4/volumes"
Feb 17 16:17:11 crc kubenswrapper[4808]: I0217 16:17:11.230868 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.071221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerStarted","Data":"8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8"}
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.071490 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerStarted","Data":"8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90"}
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.071499 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerStarted","Data":"98396bda825cd064a21268c85ea75ac821bba4f4fc3e844ab94ef3298d308124"}
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.073791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerStarted","Data":"d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372"}
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.073809 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerStarted","Data":"3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8"}
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.117049 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.117025449 podStartE2EDuration="2.117025449s" podCreationTimestamp="2026-02-17 16:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:12.095566161 +0000 UTC m=+1395.611925234" watchObservedRunningTime="2026-02-17 16:17:12.117025449 +0000 UTC m=+1395.633384522"
Feb 17 16:17:12 crc kubenswrapper[4808]: I0217 16:17:12.120334 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.120321256 podStartE2EDuration="2.120321256s" podCreationTimestamp="2026-02-17 16:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:12.111483252 +0000 UTC m=+1395.627842385" watchObservedRunningTime="2026-02-17 16:17:12.120321256 +0000 UTC m=+1395.636680319"
Feb 17 16:17:14 crc kubenswrapper[4808]: I0217 16:17:14.332979 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Feb 17 16:17:15 crc kubenswrapper[4808]: I0217 16:17:15.562612 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 17 16:17:16 crc kubenswrapper[4808]: I0217 16:17:16.002639 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 17 16:17:19 crc kubenswrapper[4808]: I0217 16:17:19.849865 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:17:19 crc kubenswrapper[4808]: I0217 16:17:19.851562 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics" containerID="cri-o://b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3" gracePeriod=30
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.164838 4808 generic.go:334] "Generic (PLEG): container finished" podID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerID="b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3" exitCode=2
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.164960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerDied","Data":"b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3"}
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.452412 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.563069 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.593169 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.619349 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") pod \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\" (UID: \"0a2bf674-1881-41e9-9c0f-93e8f14ac222\") "
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.629483 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8" (OuterVolumeSpecName: "kube-api-access-jrnn8") pod "0a2bf674-1881-41e9-9c0f-93e8f14ac222" (UID: "0a2bf674-1881-41e9-9c0f-93e8f14ac222"). InnerVolumeSpecName "kube-api-access-jrnn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.722428 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrnn8\" (UniqueName: \"kubernetes.io/projected/0a2bf674-1881-41e9-9c0f-93e8f14ac222-kube-api-access-jrnn8\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.744514 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:17:20 crc kubenswrapper[4808]: I0217 16:17:20.744549 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.176790 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.177432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a2bf674-1881-41e9-9c0f-93e8f14ac222","Type":"ContainerDied","Data":"fe6c047a841d65d85a9f0e609ea1b96b4c6bc76859984c45d4fc65974fb15811"}
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.177470 4808 scope.go:117] "RemoveContainer" containerID="b8838c518fb8b535c043a526b61b1b74b26af147fff1399fef7427934840abb3"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.227101 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.241040 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.276380 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:17:21 crc kubenswrapper[4808]: E0217 16:17:21.276785 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.276802 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.277174 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" containerName="kube-state-metrics"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.278028 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.280935 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.281142 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.323168 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.351177 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437245 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437479 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437522 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.437555 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ffdb\" (UniqueName: \"kubernetes.io/projected/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-api-access-4ffdb\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.538968 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.539040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.539089 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ffdb\" (UniqueName: \"kubernetes.io/projected/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-api-access-4ffdb\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.539159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.544627 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.545564 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.552161 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.580560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ffdb\" (UniqueName: \"kubernetes.io/projected/65ea994e-22f1-4dbf-8b79-8810148fad94-kube-api-access-4ffdb\") pod \"kube-state-metrics-0\" (UID: \"65ea994e-22f1-4dbf-8b79-8810148fad94\") " pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.616886 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.786103 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:21 crc kubenswrapper[4808]: I0217 16:17:21.829953 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.220:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.196201 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253535 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253951 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent" containerID="cri-o://8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf" gracePeriod=30
Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253998 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" containerID="cri-o://14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90" gracePeriod=30
Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253953 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd" containerID="cri-o://d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe" gracePeriod=30
Feb 17 16:17:22 crc kubenswrapper[4808]: I0217 16:17:22.253897 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" containerID="cri-o://b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5" gracePeriod=30
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.188332 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2bf674-1881-41e9-9c0f-93e8f14ac222" path="/var/lib/kubelet/pods/0a2bf674-1881-41e9-9c0f-93e8f14ac222/volumes"
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.202662 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ea994e-22f1-4dbf-8b79-8810148fad94","Type":"ContainerStarted","Data":"e7d21c872fa4c721be582bc5512fce9ea8639756444f3305678af814ac6cbd4d"}
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.202723 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"65ea994e-22f1-4dbf-8b79-8810148fad94","Type":"ContainerStarted","Data":"1aaa23450d14170763e407fef48c651573ad4a50cf0158720864da2982c04494"}
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.202782 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206013 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe" exitCode=0
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206035 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90" exitCode=2
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206044 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5" exitCode=0
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe"}
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206085 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90"}
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.206095 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5"}
Feb 17 16:17:23 crc kubenswrapper[4808]: I0217 16:17:23.222369 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7956094390000001 podStartE2EDuration="2.222352426s" podCreationTimestamp="2026-02-17 16:17:21 +0000 UTC" firstStartedPulling="2026-02-17 16:17:22.204533312 +0000 UTC m=+1405.720892375" lastFinishedPulling="2026-02-17 16:17:22.631276289 +0000 UTC m=+1406.147635362" observedRunningTime="2026-02-17 16:17:23.220917857 +0000 UTC m=+1406.737276940" watchObservedRunningTime="2026-02-17 16:17:23.222352426 +0000 UTC m=+1406.738711499"
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.227550 4808 generic.go:334] "Generic (PLEG): container finished" podID="9e219b86-d82e-47f5-b071-c44ce0695362" containerID="8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf" exitCode=0
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.227606 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf"}
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.228061 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e219b86-d82e-47f5-b071-c44ce0695362","Type":"ContainerDied","Data":"48499d1ccd18294cde816d0461ae46337409d9b91f256c480873ba6063c87133"}
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.228075 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48499d1ccd18294cde816d0461ae46337409d9b91f256c480873ba6063c87133"
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.254973 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423761 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423814 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423879 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.423969 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424002 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424120 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424167 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") pod \"9e219b86-d82e-47f5-b071-c44ce0695362\" (UID: \"9e219b86-d82e-47f5-b071-c44ce0695362\") "
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424820 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.424903 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.429672 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts" (OuterVolumeSpecName: "scripts") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.432458 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867" (OuterVolumeSpecName: "kube-api-access-gj867") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "kube-api-access-gj867". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.464949 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.524433 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526234 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526267 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526282 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e219b86-d82e-47f5-b071-c44ce0695362-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526293 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526305 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-scripts\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.526316 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj867\" (UniqueName: \"kubernetes.io/projected/9e219b86-d82e-47f5-b071-c44ce0695362-kube-api-access-gj867\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.560425 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data" (OuterVolumeSpecName: "config-data") pod "9e219b86-d82e-47f5-b071-c44ce0695362" (UID: "9e219b86-d82e-47f5-b071-c44ce0695362"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:25 crc kubenswrapper[4808]: I0217 16:17:25.628165 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e219b86-d82e-47f5-b071-c44ce0695362-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.237640 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.276990 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.288167 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299425 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299822 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent"
Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299840 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent"
Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299854 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd"
Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299861 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd"
Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299879 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core"
Feb 17 16:17:26 crc
kubenswrapper[4808]: I0217 16:17:26.299885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" Feb 17 16:17:26 crc kubenswrapper[4808]: E0217 16:17:26.299902 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.299908 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300076 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-central-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300092 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="ceilometer-notification-agent" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300107 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="proxy-httpd" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.300118 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" containerName="sg-core" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.302025 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.304422 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.304876 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.308917 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.318825 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.446601 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.446668 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.446923 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447023 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447203 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447306 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.447480 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549081 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"ceilometer-0\" (UID: 
\"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549209 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549235 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549294 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549406 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549434 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.549660 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.550429 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.550462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.555471 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.555616 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: 
I0217 16:17:26.556541 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.556691 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.560801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.574220 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"ceilometer-0\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " pod="openstack/ceilometer-0" Feb 17 16:17:26 crc kubenswrapper[4808]: I0217 16:17:26.620069 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:27 crc kubenswrapper[4808]: I0217 16:17:27.169722 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e219b86-d82e-47f5-b071-c44ce0695362" path="/var/lib/kubelet/pods/9e219b86-d82e-47f5-b071-c44ce0695362/volumes" Feb 17 16:17:27 crc kubenswrapper[4808]: I0217 16:17:27.171609 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:27 crc kubenswrapper[4808]: I0217 16:17:27.249215 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"ab32feefa5626c6c7de2470473cdca164dd77fd77015ec801b8e2ecef92b4ac6"} Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.279545 4808 generic.go:334] "Generic (PLEG): container finished" podID="67800510-1957-448c-88a1-0d2898a6524b" containerID="93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa" exitCode=137 Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.279920 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerDied","Data":"93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa"} Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.282379 4808 generic.go:334] "Generic (PLEG): container finished" podID="018b3b96-1953-4437-83ab-99bc970bcd36" containerID="6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959" exitCode=137 Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.282406 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerDied","Data":"6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959"} Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.554843 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.561116 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.698784 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") pod \"67800510-1957-448c-88a1-0d2898a6524b\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.698915 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") pod \"67800510-1957-448c-88a1-0d2898a6524b\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.698956 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699034 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699084 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: 
\"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699151 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") pod \"018b3b96-1953-4437-83ab-99bc970bcd36\" (UID: \"018b3b96-1953-4437-83ab-99bc970bcd36\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.699245 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") pod \"67800510-1957-448c-88a1-0d2898a6524b\" (UID: \"67800510-1957-448c-88a1-0d2898a6524b\") " Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.700058 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs" (OuterVolumeSpecName: "logs") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.710839 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77" (OuterVolumeSpecName: "kube-api-access-tml77") pod "67800510-1957-448c-88a1-0d2898a6524b" (UID: "67800510-1957-448c-88a1-0d2898a6524b"). InnerVolumeSpecName "kube-api-access-tml77". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.711887 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj" (OuterVolumeSpecName: "kube-api-access-mb4wj") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). 
InnerVolumeSpecName "kube-api-access-mb4wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.733467 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data" (OuterVolumeSpecName: "config-data") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.735457 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "018b3b96-1953-4437-83ab-99bc970bcd36" (UID: "018b3b96-1953-4437-83ab-99bc970bcd36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.740903 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data" (OuterVolumeSpecName: "config-data") pod "67800510-1957-448c-88a1-0d2898a6524b" (UID: "67800510-1957-448c-88a1-0d2898a6524b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.740472 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67800510-1957-448c-88a1-0d2898a6524b" (UID: "67800510-1957-448c-88a1-0d2898a6524b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801670 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801709 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tml77\" (UniqueName: \"kubernetes.io/projected/67800510-1957-448c-88a1-0d2898a6524b-kube-api-access-tml77\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801721 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67800510-1957-448c-88a1-0d2898a6524b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801730 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb4wj\" (UniqueName: \"kubernetes.io/projected/018b3b96-1953-4437-83ab-99bc970bcd36-kube-api-access-mb4wj\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801741 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/018b3b96-1953-4437-83ab-99bc970bcd36-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801750 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:28 crc kubenswrapper[4808]: I0217 16:17:28.801758 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/018b3b96-1953-4437-83ab-99bc970bcd36-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.293671 4808 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4"} Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.295781 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"018b3b96-1953-4437-83ab-99bc970bcd36","Type":"ContainerDied","Data":"21c9110345aef4dc69cbeac414de965fd822d356a427b405912ce038ca889eb8"} Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.295815 4808 scope.go:117] "RemoveContainer" containerID="6ef8e3bebfc9cfcadeefd087d4fa6251ebd40b4d37426989452bb671f4dca959" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.295941 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.300489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"67800510-1957-448c-88a1-0d2898a6524b","Type":"ContainerDied","Data":"b5824b16acbd91bc8be7043e9329004ce8288b6bdf03b1752a9c0085eb731c99"} Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.300811 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.337768 4808 scope.go:117] "RemoveContainer" containerID="b61b15418b3bd37da0c8b8ccd088976fe8d71ecad15624d7a4fc984f84514eef" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.354022 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.376933 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.408113 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.418324 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: E0217 16:17:29.418976 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.419085 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" Feb 17 16:17:29 crc kubenswrapper[4808]: E0217 16:17:29.419164 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.419217 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" Feb 17 16:17:29 crc kubenswrapper[4808]: E0217 16:17:29.419279 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.419328 4808 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.420200 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="67800510-1957-448c-88a1-0d2898a6524b" containerName="nova-cell1-novncproxy-novncproxy" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.420298 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-log" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.420365 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" containerName="nova-metadata-metadata" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.421523 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.426830 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.432340 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.435170 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.449664 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.462548 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.464001 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.465957 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.466168 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.466481 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.476523 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.496850 4808 scope.go:117] "RemoveContainer" containerID="93feefbbf60d56afc10b9bf64ecb3070c5634d6555929b547ee15577ff50a6aa" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515617 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515685 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515709 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"nova-metadata-0\" (UID: 
\"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.515866 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.617887 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhp2l\" (UniqueName: \"kubernetes.io/projected/e1acfe51-1173-4ce1-a645-d757d30e3312-kube-api-access-dhp2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.617979 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618004 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618223 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618781 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618882 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618922 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.618988 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 
16:17:29.619263 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.619391 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.619516 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.622392 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.628219 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.628434 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.636659 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"nova-metadata-0\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") " pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721496 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721662 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhp2l\" (UniqueName: \"kubernetes.io/projected/e1acfe51-1173-4ce1-a645-d757d30e3312-kube-api-access-dhp2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721719 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.721741 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.724872 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.725109 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.725628 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.727149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1acfe51-1173-4ce1-a645-d757d30e3312-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.743122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhp2l\" (UniqueName: \"kubernetes.io/projected/e1acfe51-1173-4ce1-a645-d757d30e3312-kube-api-access-dhp2l\") pod \"nova-cell1-novncproxy-0\" (UID: \"e1acfe51-1173-4ce1-a645-d757d30e3312\") " pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.749032 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 17 16:17:29 crc kubenswrapper[4808]: I0217 16:17:29.814301 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.248813 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:30 crc kubenswrapper[4808]: W0217 16:17:30.256025 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4225bf1_ce01_4830_b857_2201d4e67fd6.slice/crio-b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe WatchSource:0}: Error finding container b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe: Status 404 returned error can't find the container with id b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.315805 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerStarted","Data":"b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe"} Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.321567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337"} Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.345344 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.750995 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.751482 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.751804 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 17 16:17:30 crc kubenswrapper[4808]: I0217 16:17:30.759819 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.160912 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018b3b96-1953-4437-83ab-99bc970bcd36" path="/var/lib/kubelet/pods/018b3b96-1953-4437-83ab-99bc970bcd36/volumes" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.161486 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67800510-1957-448c-88a1-0d2898a6524b" path="/var/lib/kubelet/pods/67800510-1957-448c-88a1-0d2898a6524b/volumes" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.331695 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e1acfe51-1173-4ce1-a645-d757d30e3312","Type":"ContainerStarted","Data":"f0e4e0459d4b30bcbc27bbf87d35c5a023f938b33320b620b4c3125771b4ca6f"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.331741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"e1acfe51-1173-4ce1-a645-d757d30e3312","Type":"ContainerStarted","Data":"5c0fe5224fe64637ee65d7c020d56249fad757b4f26c1d5910f3c48ed30b6247"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.336060 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.342531 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerStarted","Data":"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.342665 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerStarted","Data":"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"} Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.342681 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.345248 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.354965 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.354948062 podStartE2EDuration="2.354948062s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:31.34997013 +0000 UTC m=+1414.866329203" watchObservedRunningTime="2026-02-17 16:17:31.354948062 +0000 UTC m=+1414.871307135" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 
16:17:31.392991 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.392974817 podStartE2EDuration="2.392974817s" podCreationTimestamp="2026-02-17 16:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:31.389503436 +0000 UTC m=+1414.905862509" watchObservedRunningTime="2026-02-17 16:17:31.392974817 +0000 UTC m=+1414.909333890" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.557409 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.559026 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.593798 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.656520 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679751 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679825 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " 
pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679899 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.679963 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.781817 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " 
pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.781934 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.781994 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.782033 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.782056 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.782086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" 
Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.783177 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.783892 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.784087 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.784417 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.784621 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.837670 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"dnsmasq-dns-5fd9b586ff-kf4dn\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:31 crc kubenswrapper[4808]: I0217 16:17:31.914120 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:32 crc kubenswrapper[4808]: I0217 16:17:32.392213 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.358488 4808 generic.go:334] "Generic (PLEG): container finished" podID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" exitCode=0 Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.358563 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerDied","Data":"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e"} Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.359010 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerStarted","Data":"8fe947d0790a922756d78327f84cf510a97c6419a7ba4cf6d5a3665a8b91aebe"} Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.361772 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerStarted","Data":"721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec"} Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.446589 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=2.172050217 podStartE2EDuration="7.446567509s" podCreationTimestamp="2026-02-17 16:17:26 +0000 UTC" firstStartedPulling="2026-02-17 16:17:27.154497842 +0000 UTC m=+1410.670856915" lastFinishedPulling="2026-02-17 16:17:32.429015134 +0000 UTC m=+1415.945374207" observedRunningTime="2026-02-17 16:17:33.438284881 +0000 UTC m=+1416.954643954" watchObservedRunningTime="2026-02-17 16:17:33.446567509 +0000 UTC m=+1416.962926572" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.465348 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.474201 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.491880 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.633781 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.633877 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.633965 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736024 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736106 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736199 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736810 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.736808 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.763260 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"community-operators-vbtkb\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:33 crc kubenswrapper[4808]: I0217 16:17:33.889657 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.373407 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerStarted","Data":"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"} Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.373941 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.413996 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" podStartSLOduration=3.413971288 podStartE2EDuration="3.413971288s" podCreationTimestamp="2026-02-17 16:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:34.403213673 +0000 UTC m=+1417.919572746" watchObservedRunningTime="2026-02-17 16:17:34.413971288 +0000 UTC m=+1417.930330371" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.432199 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.432440 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" containerID="cri-o://8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90" gracePeriod=30 Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.432561 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" containerID="cri-o://8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8" gracePeriod=30 Feb 17 16:17:34 crc kubenswrapper[4808]: W0217 16:17:34.503857 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02c5cc0b_1b55_465f_8f31_fd8575d07242.slice/crio-11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b WatchSource:0}: Error finding container 11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b: Status 404 returned error can't find the container with id 11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.517606 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.749598 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.749960 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:17:34 crc kubenswrapper[4808]: I0217 16:17:34.815144 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.384039 4808 
generic.go:334] "Generic (PLEG): container finished" podID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerID="8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90" exitCode=143 Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.384093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerDied","Data":"8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90"} Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.385693 4808 generic.go:334] "Generic (PLEG): container finished" podID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerID="e98a2e96df763da34095f5b36d490a12752ad034b23f41d68bf217b2eaf71996" exitCode=0 Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.387078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"e98a2e96df763da34095f5b36d490a12752ad034b23f41d68bf217b2eaf71996"} Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.387098 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerStarted","Data":"11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b"} Feb 17 16:17:35 crc kubenswrapper[4808]: I0217 16:17:35.387111 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.397625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerStarted","Data":"77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1"} Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.800099 4808 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.800868 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" containerID="cri-o://d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4" gracePeriod=30 Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.801375 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" containerID="cri-o://721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec" gracePeriod=30 Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.801497 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" containerID="cri-o://35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337" gracePeriod=30 Feb 17 16:17:36 crc kubenswrapper[4808]: I0217 16:17:36.801551 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" containerID="cri-o://a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed" gracePeriod=30 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.411430 4808 generic.go:334] "Generic (PLEG): container finished" podID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerID="77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.411533 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" 
event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425002 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425072 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed" exitCode=2 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425089 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425103 4808 generic.go:334] "Generic (PLEG): container finished" podID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerID="d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4" exitCode=0 Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337"} Feb 17 16:17:37 crc kubenswrapper[4808]: I0217 16:17:37.425241 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.341903 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.429885 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.429984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430032 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430157 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: 
\"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430255 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430711 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430736 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.430763 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") pod \"28d43ac9-e802-4679-a989-5032d56ea9dd\" (UID: \"28d43ac9-e802-4679-a989-5032d56ea9dd\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.431802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.432050 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.432565 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.432608 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d43ac9-e802-4679-a989-5032d56ea9dd-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.436299 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk" (OuterVolumeSpecName: "kube-api-access-fwssk") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "kube-api-access-fwssk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.442591 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerStarted","Data":"4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.448050 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d43ac9-e802-4679-a989-5032d56ea9dd","Type":"ContainerDied","Data":"ab32feefa5626c6c7de2470473cdca164dd77fd77015ec801b8e2ecef92b4ac6"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.448101 4808 scope.go:117] "RemoveContainer" containerID="721c57846faaa4f40473344e9d393bd7d039388a3ea80e13d23e98986555a7ec" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.448266 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454442 4808 generic.go:334] "Generic (PLEG): container finished" podID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerID="8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8" exitCode=0 Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454488 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerDied","Data":"8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"646d437b-8ce5-47ba-8fc6-9c6451caacc8","Type":"ContainerDied","Data":"98396bda825cd064a21268c85ea75ac821bba4f4fc3e844ab94ef3298d308124"} Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.454545 4808 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="98396bda825cd064a21268c85ea75ac821bba4f4fc3e844ab94ef3298d308124" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.455602 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts" (OuterVolumeSpecName: "scripts") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.470531 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbtkb" podStartSLOduration=3.012180004 podStartE2EDuration="5.470513788s" podCreationTimestamp="2026-02-17 16:17:33 +0000 UTC" firstStartedPulling="2026-02-17 16:17:35.387730284 +0000 UTC m=+1418.904089347" lastFinishedPulling="2026-02-17 16:17:37.846064058 +0000 UTC m=+1421.362423131" observedRunningTime="2026-02-17 16:17:38.459461325 +0000 UTC m=+1421.975820408" watchObservedRunningTime="2026-02-17 16:17:38.470513788 +0000 UTC m=+1421.986872861" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.478995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.526170 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534792 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534825 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwssk\" (UniqueName: \"kubernetes.io/projected/28d43ac9-e802-4679-a989-5032d56ea9dd-kube-api-access-fwssk\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534835 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.534843 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.569764 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data" (OuterVolumeSpecName: "config-data") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.571437 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28d43ac9-e802-4679-a989-5032d56ea9dd" (UID: "28d43ac9-e802-4679-a989-5032d56ea9dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.604724 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.607110 4808 scope.go:117] "RemoveContainer" containerID="a4ab3534824b6e5095da080bc7891b4fec20af147b6023092cb6d058a442f5ed" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.630742 4808 scope.go:117] "RemoveContainer" containerID="35a73f991947a0cd10731b25033a4694cf130ce52c934dc6024d1cb61cb74337" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.637185 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.637228 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d43ac9-e802-4679-a989-5032d56ea9dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.659944 4808 scope.go:117] "RemoveContainer" containerID="d280b23c1a5b1af2bcce4dd612c258d4f33571abef294ea93665969a086afee4" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.738717 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.738817 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc 
kubenswrapper[4808]: I0217 16:17:38.738980 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.739109 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") pod \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\" (UID: \"646d437b-8ce5-47ba-8fc6-9c6451caacc8\") " Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.740216 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs" (OuterVolumeSpecName: "logs") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.743738 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p" (OuterVolumeSpecName: "kube-api-access-7629p") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "kube-api-access-7629p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.770133 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data" (OuterVolumeSpecName: "config-data") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.771715 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "646d437b-8ce5-47ba-8fc6-9c6451caacc8" (UID: "646d437b-8ce5-47ba-8fc6-9c6451caacc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.804723 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.816934 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.829708 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830439 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830459 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830471 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830479 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830506 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 
16:17:38.830514 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830531 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830537 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830547 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830552 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: E0217 16:17:38.830565 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830575 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830778 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-api" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830791 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-notification-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830799 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="sg-core" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 
16:17:38.830809 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" containerName="nova-api-log" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830817 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="ceilometer-central-agent" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.830825 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" containerName="proxy-httpd" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.832943 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.838127 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.838299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.838415 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.843039 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.844986 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.845010 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7629p\" (UniqueName: \"kubernetes.io/projected/646d437b-8ce5-47ba-8fc6-9c6451caacc8-kube-api-access-7629p\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 
16:17:38.845022 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/646d437b-8ce5-47ba-8fc6-9c6451caacc8-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.845030 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/646d437b-8ce5-47ba-8fc6-9c6451caacc8-logs\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946578 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: 
I0217 16:17:38.946821 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946898 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:38 crc kubenswrapper[4808]: I0217 16:17:38.946972 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049061 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0" Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049117 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049139 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049413 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049456 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049569 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.049764 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.050034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.052931 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.053974 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.053991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.055142 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.056327 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.070088 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"ceilometer-0\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.153180 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.158176 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d43ac9-e802-4679-a989-5032d56ea9dd" path="/var/lib/kubelet/pods/28d43ac9-e802-4679-a989-5032d56ea9dd/volumes"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.476849 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.509402 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.526135 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.539802 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.542148 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.545036 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.545270 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.545418 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.572207 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.661883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.661935 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662042 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662099 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662124 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.662156 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.669613 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.752095 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.752133 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764597 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764666 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764732 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.764944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.765320 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.773428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.774427 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.775101 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.782345 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.805170 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"nova-api-0\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") " pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.816336 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.864109 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:39 crc kubenswrapper[4808]: I0217 16:17:39.957994 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.446257 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.503381 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerStarted","Data":"ea9847b252efaef71e3a85841133385f61299d19b321c26d06d5bb202a3896ea"}
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.507323 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"3b118204dd16ab977f67d0447b3dc8abe3067fde9909bbf01899be9a3a24cb87"}
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.526203 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.725398 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"]
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.727010 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.731971 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.732322 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.743167 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"]
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796314 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796476 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.796500 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.803881 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.804014 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.898212 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.898804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.898906 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.899130 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.903238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.906604 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.913366 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:40 crc kubenswrapper[4808]: I0217 16:17:40.922123 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"nova-cell1-cell-mapping-lf98l\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.075843 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l"
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.158345 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646d437b-8ce5-47ba-8fc6-9c6451caacc8" path="/var/lib/kubelet/pods/646d437b-8ce5-47ba-8fc6-9c6451caacc8/volumes"
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.521066 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerStarted","Data":"ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248"}
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.521357 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerStarted","Data":"b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4"}
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.545192 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe"}
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.545232 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c"}
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.546987 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5469765840000003 podStartE2EDuration="2.546976584s" podCreationTimestamp="2026-02-17 16:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:41.545548117 +0000 UTC m=+1425.061907190" watchObservedRunningTime="2026-02-17 16:17:41.546976584 +0000 UTC m=+1425.063335657"
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.658207 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"]
Feb 17 16:17:41 crc kubenswrapper[4808]: I0217 16:17:41.920756 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn"
Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.016163 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"]
Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.028856 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" containerID="cri-o://60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae" gracePeriod=10
Feb 17 16:17:42 crc kubenswrapper[4808]: E0217 16:17:42.172262 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17dd9003_af7c_4ead_bd8a_69dd599672e1.slice/crio-60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.555758 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerStarted","Data":"af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79"}
Feb 17 16:17:42 crc kubenswrapper[4808]: I0217 16:17:42.557198 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerStarted","Data":"2b898e02f703f3e6f00a35ddb4ceb83c7f74fbaad9c4fcf19b31734489f2f161"}
Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.500744 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.216:5353: connect: connection refused"
Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.587625 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lf98l" podStartSLOduration=3.587603681 podStartE2EDuration="3.587603681s" podCreationTimestamp="2026-02-17 16:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:43.578291694 +0000 UTC m=+1427.094650767" watchObservedRunningTime="2026-02-17 16:17:43.587603681 +0000 UTC m=+1427.103962764"
Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.890795 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbtkb"
Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.891171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbtkb"
Feb 17 16:17:43 crc kubenswrapper[4808]: I0217 16:17:43.942778 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbtkb"
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.574961 4808 generic.go:334] "Generic (PLEG): container finished" podID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerID="60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae" exitCode=0
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.575041 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerDied","Data":"60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae"}
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.632859 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbtkb"
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.692009 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"]
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.827830 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6"
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998211 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") "
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998692 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") "
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998819 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") "
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998932 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") "
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.998995 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") "
Feb 17 16:17:44 crc kubenswrapper[4808]: I0217 16:17:44.999026 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") pod \"17dd9003-af7c-4ead-bd8a-69dd599672e1\" (UID: \"17dd9003-af7c-4ead-bd8a-69dd599672e1\") "
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.050604 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7" (OuterVolumeSpecName: "kube-api-access-dghr7") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "kube-api-access-dghr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.093465 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.093489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.094188 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.106062 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config" (OuterVolumeSpecName: "config") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107373 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dghr7\" (UniqueName: \"kubernetes.io/projected/17dd9003-af7c-4ead-bd8a-69dd599672e1-kube-api-access-dghr7\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107493 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107589 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107669 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-config\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.107755 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.112638 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "17dd9003-af7c-4ead-bd8a-69dd599672e1" (UID: "17dd9003-af7c-4ead-bd8a-69dd599672e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.210070 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17dd9003-af7c-4ead-bd8a-69dd599672e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.590816 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-ktqh6" event={"ID":"17dd9003-af7c-4ead-bd8a-69dd599672e1","Type":"ContainerDied","Data":"6041d8f48336fb9f3aea4819de5b72096ec393680040db5b6c883b60b9ab2c94"}
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.591136 4808 scope.go:117] "RemoveContainer" containerID="60ea09e4f101b5eefb07143e634305b321a92f4dcd3e620b2c5a1a60a199bdae"
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.590828 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-ktqh6"
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.596972 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22"}
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.649882 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"]
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.670003 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-ktqh6"]
Feb 17 16:17:45 crc kubenswrapper[4808]: I0217 16:17:45.677023 4808 scope.go:117] "RemoveContainer" containerID="3ef21441db2673d8cb4a73235d72eeb9fb765f3ab14514345fdd78ed72a42293"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.585830 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-scd77"]
Feb 17 16:17:46 crc kubenswrapper[4808]: E0217 16:17:46.586229 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="init"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.586246 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="init"
Feb 17 16:17:46 crc kubenswrapper[4808]: E0217 16:17:46.586273 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.586280 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.586461 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" containerName="dnsmasq-dns"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.587940 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.597658 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"]
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.607740 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbtkb" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server" containerID="cri-o://4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6" gracePeriod=2
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.742065 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.742112 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.742402 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.844414 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845095 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845245 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.845984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.874232 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rlpm\" (UniqueName: 
\"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"redhat-operators-scd77\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") " pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:46 crc kubenswrapper[4808]: I0217 16:17:46.909432 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.159132 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17dd9003-af7c-4ead-bd8a-69dd599672e1" path="/var/lib/kubelet/pods/17dd9003-af7c-4ead-bd8a-69dd599672e1/volumes" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.647882 4808 generic.go:334] "Generic (PLEG): container finished" podID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerID="4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6" exitCode=0 Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.648156 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6"} Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.838550 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.971001 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") pod \"02c5cc0b-1b55-465f-8f31-fd8575d07242\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.971162 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") pod \"02c5cc0b-1b55-465f-8f31-fd8575d07242\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.971253 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") pod \"02c5cc0b-1b55-465f-8f31-fd8575d07242\" (UID: \"02c5cc0b-1b55-465f-8f31-fd8575d07242\") " Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.972266 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities" (OuterVolumeSpecName: "utilities") pod "02c5cc0b-1b55-465f-8f31-fd8575d07242" (UID: "02c5cc0b-1b55-465f-8f31-fd8575d07242"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:47 crc kubenswrapper[4808]: I0217 16:17:47.975481 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq" (OuterVolumeSpecName: "kube-api-access-6mnpq") pod "02c5cc0b-1b55-465f-8f31-fd8575d07242" (UID: "02c5cc0b-1b55-465f-8f31-fd8575d07242"). InnerVolumeSpecName "kube-api-access-6mnpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.023605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c5cc0b-1b55-465f-8f31-fd8575d07242" (UID: "02c5cc0b-1b55-465f-8f31-fd8575d07242"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.074866 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.074931 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mnpq\" (UniqueName: \"kubernetes.io/projected/02c5cc0b-1b55-465f-8f31-fd8575d07242-kube-api-access-6mnpq\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.074956 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c5cc0b-1b55-465f-8f31-fd8575d07242-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.078978 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"] Feb 17 16:17:48 crc kubenswrapper[4808]: W0217 16:17:48.090102 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdd136e1_cf53_4300_9df6_53bfb28905cd.slice/crio-beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1 WatchSource:0}: Error finding container beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1: Status 404 returned error can't find the container with id 
beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1 Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.662637 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerStarted","Data":"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689"} Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.663170 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.667170 4808 generic.go:334] "Generic (PLEG): container finished" podID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914" exitCode=0 Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.667422 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"} Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.667457 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerStarted","Data":"beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1"} Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.671256 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbtkb" event={"ID":"02c5cc0b-1b55-465f-8f31-fd8575d07242","Type":"ContainerDied","Data":"11e80ad30caf9ea56cfefbec7d1e89b12ad5290f08e7fc3cc6e04510e32e5b8b"} Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.671300 4808 scope.go:117] "RemoveContainer" containerID="4889c213cbd2b08515c838ee226a5311661235481dfa4a53524a4c6a6346e5a6" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.671426 4808 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbtkb" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.692459 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.77610477 podStartE2EDuration="10.69244161s" podCreationTimestamp="2026-02-17 16:17:38 +0000 UTC" firstStartedPulling="2026-02-17 16:17:39.674469118 +0000 UTC m=+1423.190828191" lastFinishedPulling="2026-02-17 16:17:47.590805948 +0000 UTC m=+1431.107165031" observedRunningTime="2026-02-17 16:17:48.688022963 +0000 UTC m=+1432.204382076" watchObservedRunningTime="2026-02-17 16:17:48.69244161 +0000 UTC m=+1432.208800683" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.695238 4808 scope.go:117] "RemoveContainer" containerID="77fe18d2b0943541237f3b74c773e3a3e36241d7ed44ba023146405de7f15ab1" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.730811 4808 scope.go:117] "RemoveContainer" containerID="e98a2e96df763da34095f5b36d490a12752ad034b23f41d68bf217b2eaf71996" Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.735884 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:48 crc kubenswrapper[4808]: I0217 16:17:48.750646 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbtkb"] Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.161627 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" path="/var/lib/kubelet/pods/02c5cc0b-1b55-465f-8f31-fd8575d07242/volumes" Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.756984 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.757057 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-metadata-0" Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.765313 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.767988 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.871833 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:17:49 crc kubenswrapper[4808]: I0217 16:17:49.871895 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.699801 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerStarted","Data":"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"} Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.701838 4808 generic.go:334] "Generic (PLEG): container finished" podID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerID="af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79" exitCode=0 Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.701865 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerDied","Data":"af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79"} Feb 17 16:17:50 crc kubenswrapper[4808]: I0217 16:17:50.884853 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:50 crc kubenswrapper[4808]: 
I0217 16:17:50.884886 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.168223 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.367404 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.367707 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.367787 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.368299 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") pod \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\" (UID: \"9a26947f-ccdc-4726-98dc-a0c08a2a198b\") " Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.372998 4808 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts" (OuterVolumeSpecName: "scripts") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.381785 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866" (OuterVolumeSpecName: "kube-api-access-4l866") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "kube-api-access-4l866". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.403552 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data" (OuterVolumeSpecName: "config-data") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.408621 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a26947f-ccdc-4726-98dc-a0c08a2a198b" (UID: "9a26947f-ccdc-4726-98dc-a0c08a2a198b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.475393 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.475817 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l866\" (UniqueName: \"kubernetes.io/projected/9a26947f-ccdc-4726-98dc-a0c08a2a198b-kube-api-access-4l866\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.475922 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.476032 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9a26947f-ccdc-4726-98dc-a0c08a2a198b-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.737593 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lf98l" event={"ID":"9a26947f-ccdc-4726-98dc-a0c08a2a198b","Type":"ContainerDied","Data":"2b898e02f703f3e6f00a35ddb4ceb83c7f74fbaad9c4fcf19b31734489f2f161"} Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.737634 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b898e02f703f3e6f00a35ddb4ceb83c7f74fbaad9c4fcf19b31734489f2f161" Feb 17 16:17:52 crc kubenswrapper[4808]: I0217 16:17:52.737731 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lf98l" Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.017448 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.018288 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" containerID="cri-o://b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4" gracePeriod=30 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.018352 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api" containerID="cri-o://ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248" gracePeriod=30 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.040537 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.040948 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler" containerID="cri-o://d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" gracePeriod=30 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.065650 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.065917 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log" containerID="cri-o://0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a" gracePeriod=30 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.066109 4808 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata" containerID="cri-o://ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59" gracePeriod=30 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.747132 4808 generic.go:334] "Generic (PLEG): container finished" podID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerID="b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4" exitCode=143 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.747211 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerDied","Data":"b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4"} Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.748860 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a" exitCode=143 Feb 17 16:17:53 crc kubenswrapper[4808]: I0217 16:17:53.748895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerDied","Data":"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"} Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.562832 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.564152 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.564692 4808 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 17 16:17:55 crc kubenswrapper[4808]: E0217 16:17:55.564749 4808 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler" Feb 17 16:17:55 crc kubenswrapper[4808]: I0217 16:17:55.779864 4808 generic.go:334] "Generic (PLEG): container finished" podID="c906d5a8-4187-4f58-a352-fa7faea85309" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372" exitCode=0 Feb 17 16:17:55 crc kubenswrapper[4808]: I0217 16:17:55.779968 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerDied","Data":"d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372"} Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.213868 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": read tcp 10.217.0.2:46540->10.217.0.223:8775: read: connection reset by peer" Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.214261 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.223:8775/\": read tcp 10.217.0.2:46528->10.217.0.223:8775: read: connection reset by peer" Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.368053 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.455235 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") pod \"c906d5a8-4187-4f58-a352-fa7faea85309\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.455456 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") pod \"c906d5a8-4187-4f58-a352-fa7faea85309\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.455534 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") pod \"c906d5a8-4187-4f58-a352-fa7faea85309\" (UID: \"c906d5a8-4187-4f58-a352-fa7faea85309\") " Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.476818 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r" (OuterVolumeSpecName: "kube-api-access-crb6r") pod "c906d5a8-4187-4f58-a352-fa7faea85309" (UID: "c906d5a8-4187-4f58-a352-fa7faea85309"). InnerVolumeSpecName "kube-api-access-crb6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.508754 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c906d5a8-4187-4f58-a352-fa7faea85309" (UID: "c906d5a8-4187-4f58-a352-fa7faea85309"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.555539 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data" (OuterVolumeSpecName: "config-data") pod "c906d5a8-4187-4f58-a352-fa7faea85309" (UID: "c906d5a8-4187-4f58-a352-fa7faea85309"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.573614 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.573651 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c906d5a8-4187-4f58-a352-fa7faea85309-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.573694 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crb6r\" (UniqueName: \"kubernetes.io/projected/c906d5a8-4187-4f58-a352-fa7faea85309-kube-api-access-crb6r\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.710704 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.796513 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c906d5a8-4187-4f58-a352-fa7faea85309","Type":"ContainerDied","Data":"3a1dc36f880b404ebe891876f34b6e341baecb45367f34a30cd20f2687eeede8"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.796562 4808 scope.go:117] "RemoveContainer" containerID="d5693756f54d942082122949e8141932a3315f36a027840738a229e012a32372"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.796750 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800796 4808 generic.go:334] "Generic (PLEG): container finished" podID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59" exitCode=0
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800820 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800842 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerDied","Data":"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.800874 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4225bf1-ce01-4830-b857-2201d4e67fd6","Type":"ContainerDied","Data":"b9ba282b61dd19cf7f01d6fa791c3901ce461226c81f5bc25a782cde7271b2fe"}
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.829940 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.849058 4808 scope.go:117] "RemoveContainer" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.858382 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867134 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867628 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867640 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867663 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerName="nova-manage"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867669 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerName="nova-manage"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867683 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867689 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867704 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-utilities"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867710 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-utilities"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867726 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867734 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867754 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-content"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867759 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="extract-content"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.867771 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867779 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.867998 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-metadata"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868009 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c5cc0b-1b55-465f-8f31-fd8575d07242" containerName="registry-server"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868029 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" containerName="nova-metadata-log"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868043 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" containerName="nova-scheduler-scheduler"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868063 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" containerName="nova-manage"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.868815 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.870709 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.874388 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880315 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880554 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880615 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880670 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.880720 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") pod \"f4225bf1-ce01-4830-b857-2201d4e67fd6\" (UID: \"f4225bf1-ce01-4830-b857-2201d4e67fd6\") "
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.881186 4808 scope.go:117] "RemoveContainer" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.881690 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs" (OuterVolumeSpecName: "logs") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.907287 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx" (OuterVolumeSpecName: "kube-api-access-nzbxx") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "kube-api-access-nzbxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915058 4808 scope.go:117] "RemoveContainer" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915157 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.915728 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59\": container with ID starting with ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59 not found: ID does not exist" containerID="ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915768 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59"} err="failed to get container status \"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59\": rpc error: code = NotFound desc = could not find container \"ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59\": container with ID starting with ce6083e495f8bd1d0bb01f3f9f8ec767b206db7820b55aab9e2d9682e9112c59 not found: ID does not exist"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.915792 4808 scope.go:117] "RemoveContainer" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"
Feb 17 16:17:56 crc kubenswrapper[4808]: E0217 16:17:56.916697 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a\": container with ID starting with 0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a not found: ID does not exist" containerID="0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.916740 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a"} err="failed to get container status \"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a\": rpc error: code = NotFound desc = could not find container \"0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a\": container with ID starting with 0ea7c0c9c375fd22964f8f3f8e14e0f294b4d28792f18a93ced64305d017f82a not found: ID does not exist"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.930289 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data" (OuterVolumeSpecName: "config-data") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.955842 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f4225bf1-ce01-4830-b857-2201d4e67fd6" (UID: "f4225bf1-ce01-4830-b857-2201d4e67fd6"). InnerVolumeSpecName "nova-metadata-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.982888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbktk\" (UniqueName: \"kubernetes.io/projected/4481dde9-062b-48d4-ae35-b6fa96ccf94e-kube-api-access-lbktk\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.982932 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983090 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-config-data\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983391 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4225bf1-ce01-4830-b857-2201d4e67fd6-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983421 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983435 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzbxx\" (UniqueName: \"kubernetes.io/projected/f4225bf1-ce01-4830-b857-2201d4e67fd6-kube-api-access-nzbxx\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983452 4808 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:56 crc kubenswrapper[4808]: I0217 16:17:56.983464 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4225bf1-ce01-4830-b857-2201d4e67fd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.085499 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-config-data\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.085645 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbktk\" (UniqueName: \"kubernetes.io/projected/4481dde9-062b-48d4-ae35-b6fa96ccf94e-kube-api-access-lbktk\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.085667 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.088828 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-config-data\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.088900 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4481dde9-062b-48d4-ae35-b6fa96ccf94e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.106223 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbktk\" (UniqueName: \"kubernetes.io/projected/4481dde9-062b-48d4-ae35-b6fa96ccf94e-kube-api-access-lbktk\") pod \"nova-scheduler-0\" (UID: \"4481dde9-062b-48d4-ae35-b6fa96ccf94e\") " pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.136336 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.166745 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c906d5a8-4187-4f58-a352-fa7faea85309" path="/var/lib/kubelet/pods/c906d5a8-4187-4f58-a352-fa7faea85309/volumes"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.167434 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.167473 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.173655 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.178010 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.178317 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.205890 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.208780 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294779 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294860 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdf54f1-8cfa-46c6-addd-bda126337c05-logs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjl9\" (UniqueName: \"kubernetes.io/projected/fbdf54f1-8cfa-46c6-addd-bda126337c05-kube-api-access-jrjl9\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.294924 4808 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.295001 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-config-data\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396752 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396808 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdf54f1-8cfa-46c6-addd-bda126337c05-logs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396831 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjl9\" (UniqueName: \"kubernetes.io/projected/fbdf54f1-8cfa-46c6-addd-bda126337c05-kube-api-access-jrjl9\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396871 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.396931 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-config-data\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.397416 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fbdf54f1-8cfa-46c6-addd-bda126337c05-logs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.402065 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.402267 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-config-data\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.402462 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fbdf54f1-8cfa-46c6-addd-bda126337c05-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.417444 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjl9\" (UniqueName: \"kubernetes.io/projected/fbdf54f1-8cfa-46c6-addd-bda126337c05-kube-api-access-jrjl9\") pod \"nova-metadata-0\" (UID: \"fbdf54f1-8cfa-46c6-addd-bda126337c05\") " pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.493496 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.815710 4808 generic.go:334] "Generic (PLEG): container finished" podID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerID="ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248" exitCode=0
Feb 17 16:17:57 crc kubenswrapper[4808]: I0217 16:17:57.815791 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerDied","Data":"ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:57.869142 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: W0217 16:17:57.871258 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4481dde9_062b_48d4_ae35_b6fa96ccf94e.slice/crio-01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d WatchSource:0}: Error finding container 01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d: Status 404 returned error can't find the container with id 01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.039612 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117090 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117220 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117248 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117330 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117362 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.117446 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") pod \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\" (UID: \"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b\") "
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.129297 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs" (OuterVolumeSpecName: "logs") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.136640 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj" (OuterVolumeSpecName: "kube-api-access-b26nj") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "kube-api-access-b26nj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.148273 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: W0217 16:17:58.151505 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbdf54f1_8cfa_46c6_addd_bda126337c05.slice/crio-57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a WatchSource:0}: Error finding container 57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a: Status 404 returned error can't find the container with id 57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.161097 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.182412 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.191755 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data" (OuterVolumeSpecName: "config-data") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.211305 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" (UID: "f0fdf7ae-717a-43f1-82b8-9c87285d4b4b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220841 4808 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220869 4808 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-logs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220881 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220891 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b26nj\" (UniqueName: \"kubernetes.io/projected/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-kube-api-access-b26nj\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220904 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-config-data\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.220914 4808 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.831247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbdf54f1-8cfa-46c6-addd-bda126337c05","Type":"ContainerStarted","Data":"610af160e1941960b85a0b3a5740cab8df8fc0990aede2b062c280b582777eb1"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.831291 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbdf54f1-8cfa-46c6-addd-bda126337c05","Type":"ContainerStarted","Data":"aee42fad9d7ee53b5fdefc2286b5134b69be072ad3d32ae3e21f3e4d5364d295"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.831302 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fbdf54f1-8cfa-46c6-addd-bda126337c05","Type":"ContainerStarted","Data":"57a282b68f17139d2fd56202b4246ae469dd0c8c5c5e45c1f786d59828fa465a"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.833089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4481dde9-062b-48d4-ae35-b6fa96ccf94e","Type":"ContainerStarted","Data":"63811202ee0ca69af9a75b2b7b90d7990ed5c27c26734790ca6227d824b4737c"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.833453 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4481dde9-062b-48d4-ae35-b6fa96ccf94e","Type":"ContainerStarted","Data":"01361f852e8ff770375d1279d67e722d1f2352cff373acf2c35b5d0e7ea7e15d"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.835168 4808 generic.go:334] "Generic (PLEG): container finished" podID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5" exitCode=0
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.835199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.838074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0fdf7ae-717a-43f1-82b8-9c87285d4b4b","Type":"ContainerDied","Data":"ea9847b252efaef71e3a85841133385f61299d19b321c26d06d5bb202a3896ea"}
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.838213 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.839052 4808 scope.go:117] "RemoveContainer" containerID="ec8315c6142559a5476ca3a0343759e88721f0b33254f08b4740490ad769e248"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.865089 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.864942314 podStartE2EDuration="2.864942314s" podCreationTimestamp="2026-02-17 16:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:58.85422334 +0000 UTC m=+1442.370582423" watchObservedRunningTime="2026-02-17 16:17:58.864942314 +0000 UTC m=+1442.381301387"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.885542 4808 scope.go:117] "RemoveContainer" containerID="b94e5b5414eaea5609181fe57f8eb9c5db284f5a842649aa0395af8d5e1b42e4"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.940616 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.969038 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987217 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 17 16:17:58 crc kubenswrapper[4808]: E0217 16:17:58.987716 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api"
Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987729 4808 state_mem.go:107] "Deleted CPUSet assignment"
podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api" Feb 17 16:17:58 crc kubenswrapper[4808]: E0217 16:17:58.987746 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987752 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987964 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-api" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.987978 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" containerName="nova-api-log" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.989078 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.992244 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.992260 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 17 16:17:58 crc kubenswrapper[4808]: I0217 16:17:58.992364 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.006507 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.136677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e91a7ada-9f3c-4a6c-a56e-355538c9a868-logs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " 
pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.136799 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-config-data\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.136826 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-public-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.137512 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvn4\" (UniqueName: \"kubernetes.io/projected/e91a7ada-9f3c-4a6c-a56e-355538c9a868-kube-api-access-psvn4\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.137568 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.137602 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.156506 4808 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f0fdf7ae-717a-43f1-82b8-9c87285d4b4b" path="/var/lib/kubelet/pods/f0fdf7ae-717a-43f1-82b8-9c87285d4b4b/volumes" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.157413 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4225bf1-ce01-4830-b857-2201d4e67fd6" path="/var/lib/kubelet/pods/f4225bf1-ce01-4830-b857-2201d4e67fd6/volumes" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240016 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240048 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240181 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e91a7ada-9f3c-4a6c-a56e-355538c9a868-logs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240363 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-config-data\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-public-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.240478 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvn4\" (UniqueName: \"kubernetes.io/projected/e91a7ada-9f3c-4a6c-a56e-355538c9a868-kube-api-access-psvn4\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.241120 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e91a7ada-9f3c-4a6c-a56e-355538c9a868-logs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.246259 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.246564 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-config-data\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.247504 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.250084 4808 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e91a7ada-9f3c-4a6c-a56e-355538c9a868-public-tls-certs\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.258158 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvn4\" (UniqueName: \"kubernetes.io/projected/e91a7ada-9f3c-4a6c-a56e-355538c9a868-kube-api-access-psvn4\") pod \"nova-api-0\" (UID: \"e91a7ada-9f3c-4a6c-a56e-355538c9a868\") " pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.309945 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 17 16:17:59 crc kubenswrapper[4808]: W0217 16:17:59.844457 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91a7ada_9f3c_4a6c_a56e_355538c9a868.slice/crio-7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4 WatchSource:0}: Error finding container 7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4: Status 404 returned error can't find the container with id 7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4 Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.849373 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.859195 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerStarted","Data":"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"} Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.884836 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-scd77" 
podStartSLOduration=3.120631854 podStartE2EDuration="13.884818601s" podCreationTimestamp="2026-02-17 16:17:46 +0000 UTC" firstStartedPulling="2026-02-17 16:17:48.674163886 +0000 UTC m=+1432.190522979" lastFinishedPulling="2026-02-17 16:17:59.438350653 +0000 UTC m=+1442.954709726" observedRunningTime="2026-02-17 16:17:59.875047703 +0000 UTC m=+1443.391406776" watchObservedRunningTime="2026-02-17 16:17:59.884818601 +0000 UTC m=+1443.401177674" Feb 17 16:17:59 crc kubenswrapper[4808]: I0217 16:17:59.910481 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.91045298 podStartE2EDuration="2.91045298s" podCreationTimestamp="2026-02-17 16:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:17:59.899328695 +0000 UTC m=+1443.415687788" watchObservedRunningTime="2026-02-17 16:17:59.91045298 +0000 UTC m=+1443.426812043" Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.871213 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e91a7ada-9f3c-4a6c-a56e-355538c9a868","Type":"ContainerStarted","Data":"6eaabe9155721ee1f7bc24c6493d78d1b78c85a39555ac7bb4b0f6e8d4897798"} Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.871559 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e91a7ada-9f3c-4a6c-a56e-355538c9a868","Type":"ContainerStarted","Data":"e607105fe44353f172957e4b6be74b049fac2dfe39ce413bc8e9b4b577e1f85b"} Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.871591 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e91a7ada-9f3c-4a6c-a56e-355538c9a868","Type":"ContainerStarted","Data":"7282e7e0ac4296b48f614d85f28c8838489fbb4304a12d207a4d4c61a52c7cb4"} Feb 17 16:18:00 crc kubenswrapper[4808]: I0217 16:18:00.910431 4808 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.910408099 podStartE2EDuration="2.910408099s" podCreationTimestamp="2026-02-17 16:17:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:00.90586815 +0000 UTC m=+1444.422227223" watchObservedRunningTime="2026-02-17 16:18:00.910408099 +0000 UTC m=+1444.426767172" Feb 17 16:18:02 crc kubenswrapper[4808]: I0217 16:18:02.210292 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 17 16:18:02 crc kubenswrapper[4808]: I0217 16:18:02.497679 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:18:02 crc kubenswrapper[4808]: I0217 16:18:02.498029 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 17 16:18:06 crc kubenswrapper[4808]: I0217 16:18:06.909971 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:18:06 crc kubenswrapper[4808]: I0217 16:18:06.910951 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-scd77" Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.210234 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.239829 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.495131 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.495196 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" 
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.964438 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-scd77" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" probeResult="failure" output=<
Feb 17 16:18:07 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 16:18:07 crc kubenswrapper[4808]: >
Feb 17 16:18:07 crc kubenswrapper[4808]: I0217 16:18:07.974510 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 17 16:18:08 crc kubenswrapper[4808]: I0217 16:18:08.511348 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbdf54f1-8cfa-46c6-addd-bda126337c05" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:18:08 crc kubenswrapper[4808]: I0217 16:18:08.511908 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fbdf54f1-8cfa-46c6-addd-bda126337c05" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:18:09 crc kubenswrapper[4808]: I0217 16:18:09.169683 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 17 16:18:09 crc kubenswrapper[4808]: I0217 16:18:09.310885 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:18:09 crc kubenswrapper[4808]: I0217 16:18:09.310959 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 17 16:18:10 crc kubenswrapper[4808]: I0217 16:18:10.332748 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e91a7ada-9f3c-4a6c-a56e-355538c9a868" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:18:10 crc kubenswrapper[4808]: I0217 16:18:10.332768 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e91a7ada-9f3c-4a6c-a56e-355538c9a868" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.233:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 17 16:18:16 crc kubenswrapper[4808]: I0217 16:18:16.965480 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.020914 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.502372 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.503322 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.521851 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 17 16:18:17 crc kubenswrapper[4808]: I0217 16:18:17.791986 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"]
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.041705 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-scd77" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server" containerID="cri-o://356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" gracePeriod=2
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.054839 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.628783 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.737337 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") pod \"fdd136e1-cf53-4300-9df6-53bfb28905cd\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") "
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.737459 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") pod \"fdd136e1-cf53-4300-9df6-53bfb28905cd\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") "
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.737486 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") pod \"fdd136e1-cf53-4300-9df6-53bfb28905cd\" (UID: \"fdd136e1-cf53-4300-9df6-53bfb28905cd\") "
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.738132 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities" (OuterVolumeSpecName: "utilities") pod "fdd136e1-cf53-4300-9df6-53bfb28905cd" (UID: "fdd136e1-cf53-4300-9df6-53bfb28905cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.745315 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm" (OuterVolumeSpecName: "kube-api-access-4rlpm") pod "fdd136e1-cf53-4300-9df6-53bfb28905cd" (UID: "fdd136e1-cf53-4300-9df6-53bfb28905cd"). InnerVolumeSpecName "kube-api-access-4rlpm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.840892 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.841262 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rlpm\" (UniqueName: \"kubernetes.io/projected/fdd136e1-cf53-4300-9df6-53bfb28905cd-kube-api-access-4rlpm\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.859752 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdd136e1-cf53-4300-9df6-53bfb28905cd" (UID: "fdd136e1-cf53-4300-9df6-53bfb28905cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:18:18 crc kubenswrapper[4808]: I0217 16:18:18.943316 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdd136e1-cf53-4300-9df6-53bfb28905cd-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.068173 4808 generic.go:334] "Generic (PLEG): container finished" podID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2" exitCode=0
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.069109 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-scd77"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.076770 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"}
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.076847 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-scd77" event={"ID":"fdd136e1-cf53-4300-9df6-53bfb28905cd","Type":"ContainerDied","Data":"beb497e4573909af9da6473ab6ad5239876480309153dc5a4dbda0c71e03d0d1"}
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.076870 4808 scope.go:117] "RemoveContainer" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.101948 4808 scope.go:117] "RemoveContainer" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.116679 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"]
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.121402 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-scd77"]
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.137299 4808 scope.go:117] "RemoveContainer" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.158146 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" path="/var/lib/kubelet/pods/fdd136e1-cf53-4300-9df6-53bfb28905cd/volumes"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.199983 4808 scope.go:117] "RemoveContainer" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"
Feb 17 16:18:19 crc kubenswrapper[4808]: E0217 16:18:19.200519 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2\": container with ID starting with 356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2 not found: ID does not exist" containerID="356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200559 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2"} err="failed to get container status \"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2\": rpc error: code = NotFound desc = could not find container \"356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2\": container with ID starting with 356b63136bb36f4f253e29cd7c8a7b3e7da5036e116e56a938d183e2bd5afab2 not found: ID does not exist"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200607 4808 scope.go:117] "RemoveContainer" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"
Feb 17 16:18:19 crc kubenswrapper[4808]: E0217 16:18:19.200938 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5\": container with ID starting with 70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5 not found: ID does not exist" containerID="70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200970 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5"} err="failed to get container status \"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5\": rpc error: code = NotFound desc = could not find container \"70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5\": container with ID starting with 70c41ea11a7a6ad0cd421e097caf52b723c2e7dcd550f23abc585761684fe1f5 not found: ID does not exist"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.200992 4808 scope.go:117] "RemoveContainer" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"
Feb 17 16:18:19 crc kubenswrapper[4808]: E0217 16:18:19.201392 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914\": container with ID starting with 4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914 not found: ID does not exist" containerID="4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.201429 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914"} err="failed to get container status \"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914\": rpc error: code = NotFound desc = could not find container \"4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914\": container with ID starting with 4c33795a6d982c861075c31dcb5c9401341d147e1e982483729f44aa01df7914 not found: ID does not exist"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.350589 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.351105 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.387129 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:18:19 crc kubenswrapper[4808]: I0217 16:18:19.391638 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 17 16:18:20 crc kubenswrapper[4808]: I0217 16:18:20.079949 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 17 16:18:20 crc kubenswrapper[4808]: I0217 16:18:20.087424 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 17 16:18:29 crc kubenswrapper[4808]: I0217 16:18:29.954016 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"]
Feb 17 16:18:29 crc kubenswrapper[4808]: I0217 16:18:29.972742 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-wdrmd"]
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.050314 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-zl7nk"]
Feb 17 16:18:30 crc kubenswrapper[4808]: E0217 16:18:30.050881 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-utilities"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.050907 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-utilities"
Feb 17 16:18:30 crc kubenswrapper[4808]: E0217 16:18:30.050951 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-content"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.050959 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="extract-content"
Feb 17 16:18:30 crc kubenswrapper[4808]: E0217 16:18:30.050993 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.051002 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.051235 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdd136e1-cf53-4300-9df6-53bfb28905cd" containerName="registry-server"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.052133 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-zl7nk"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.054358 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.064088 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-zl7nk"]
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098009 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-config-data\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098220 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-scripts\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098549 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-certs\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.098950 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-combined-ca-bundle\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk"
Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 
16:18:30.099019 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnd2x\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-kube-api-access-fnd2x\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.200594 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-certs\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.200665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-combined-ca-bundle\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.201173 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnd2x\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-kube-api-access-fnd2x\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.201403 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-config-data\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.201608 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-scripts\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.205841 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-certs\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.206116 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-combined-ca-bundle\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.206590 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-config-data\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.206995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-scripts\") pod \"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.216100 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnd2x\" (UniqueName: \"kubernetes.io/projected/a4b182d0-48fc-4487-b7ad-18f7803a4d4c-kube-api-access-fnd2x\") pod 
\"cloudkitty-db-sync-zl7nk\" (UID: \"a4b182d0-48fc-4487-b7ad-18f7803a4d4c\") " pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:30 crc kubenswrapper[4808]: I0217 16:18:30.403736 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-zl7nk" Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.038027 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-zl7nk"] Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.157327 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec52dbb-ca2f-4013-8536-972042607240" path="/var/lib/kubelet/pods/2ec52dbb-ca2f-4013-8536-972042607240/volumes" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.162602 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.162682 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.162809 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.163977 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.188589 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-zl7nk" event={"ID":"a4b182d0-48fc-4487-b7ad-18f7803a4d4c","Type":"ContainerStarted","Data":"46a08a8f711b48444ba77a762f412674bac93643320d67f0c19168069a38f058"} Feb 17 16:18:31 crc kubenswrapper[4808]: E0217 16:18:31.190105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.808703 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809322 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" containerID="cri-o://d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" gracePeriod=30 Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809775 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" containerID="cri-o://de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" gracePeriod=30 Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809886 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" 
containerID="cri-o://5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" gracePeriod=30 Feb 17 16:18:31 crc kubenswrapper[4808]: I0217 16:18:31.809998 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" containerID="cri-o://c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" gracePeriod=30 Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201461 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" exitCode=0 Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201501 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" exitCode=2 Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201538 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689"} Feb 17 16:18:32 crc kubenswrapper[4808]: I0217 16:18:32.201605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22"} Feb 17 16:18:32 crc kubenswrapper[4808]: E0217 16:18:32.203249 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:32 crc 
kubenswrapper[4808]: I0217 16:18:32.261685 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:33 crc kubenswrapper[4808]: I0217 16:18:33.213966 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" exitCode=0 Feb 17 16:18:33 crc kubenswrapper[4808]: I0217 16:18:33.214104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c"} Feb 17 16:18:33 crc kubenswrapper[4808]: I0217 16:18:33.391957 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:35 crc kubenswrapper[4808]: I0217 16:18:35.925014 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.123938 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.123991 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124024 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") 
pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124139 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124198 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124238 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124311 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.124329 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") pod \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\" (UID: \"f17f0491-7507-40fb-a2b9-d13d2c51eed6\") " Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.125112 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.125383 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.129724 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts" (OuterVolumeSpecName: "scripts") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.131293 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4" (OuterVolumeSpecName: "kube-api-access-2p8c4") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "kube-api-access-2p8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.157788 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.213656 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226555 4808 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226650 4808 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-scripts\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226660 4808 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f17f0491-7507-40fb-a2b9-d13d2c51eed6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226668 4808 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226679 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p8c4\" (UniqueName: \"kubernetes.io/projected/f17f0491-7507-40fb-a2b9-d13d2c51eed6-kube-api-access-2p8c4\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.226688 4808 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259810 4808 generic.go:334] "Generic (PLEG): container finished" podID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" exitCode=0 Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259861 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe"} Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f17f0491-7507-40fb-a2b9-d13d2c51eed6","Type":"ContainerDied","Data":"3b118204dd16ab977f67d0447b3dc8abe3067fde9909bbf01899be9a3a24cb87"} Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.259922 4808 scope.go:117] "RemoveContainer" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.289642 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.301861 4808 scope.go:117] "RemoveContainer" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.328512 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.330025 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data" (OuterVolumeSpecName: "config-data") pod "f17f0491-7507-40fb-a2b9-d13d2c51eed6" (UID: "f17f0491-7507-40fb-a2b9-d13d2c51eed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.337182 4808 scope.go:117] "RemoveContainer" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.358451 4808 scope.go:117] "RemoveContainer" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.377217 4808 scope.go:117] "RemoveContainer" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.377808 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689\": container with ID starting with de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689 not found: ID does not exist" containerID="de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.377835 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689"} err="failed to get container status \"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689\": rpc error: code = NotFound desc = could not find container \"de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689\": container with ID starting with de6991fc741f4dab215e9fa0e4bbfa723a35a1ad1c479d9fbf2ff2d2ef68c689 not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.377856 4808 scope.go:117] "RemoveContainer" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.378181 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22\": container with ID starting with 5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22 not found: ID does not exist" containerID="5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378203 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22"} err="failed to get container status \"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22\": rpc error: code = NotFound desc = could not find container \"5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22\": container with ID starting with 5b669a87f3e7dd40db4275e143a7c3152957d19b8ee8fd03190fac9ff4c10d22 not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378214 4808 scope.go:117] "RemoveContainer" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 
16:18:36.378450 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe\": container with ID starting with c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe not found: ID does not exist" containerID="c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378472 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe"} err="failed to get container status \"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe\": rpc error: code = NotFound desc = could not find container \"c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe\": container with ID starting with c0971f47e4c9c39f71e7c6f7840068671f8ad7112b616991124ea5bfcdc2d3fe not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378484 4808 scope.go:117] "RemoveContainer" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.378663 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c\": container with ID starting with d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c not found: ID does not exist" containerID="d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.378682 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c"} err="failed to get container status \"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c\": rpc 
error: code = NotFound desc = could not find container \"d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c\": container with ID starting with d002c2e4e3d0d68bfb48ed8610eba6f9a0ecf6103a908faf77897768a2cf9b9c not found: ID does not exist" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.430419 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f17f0491-7507-40fb-a2b9-d13d2c51eed6-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.597113 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.607426 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630062 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630424 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630440 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630457 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630464 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630483 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" Feb 17 16:18:36 crc kubenswrapper[4808]: 
I0217 16:18:36.630488 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" Feb 17 16:18:36 crc kubenswrapper[4808]: E0217 16:18:36.630513 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630518 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630723 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-central-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630739 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="sg-core" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630756 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="ceilometer-notification-agent" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.630768 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" containerName="proxy-httpd" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.635260 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.637655 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.637908 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.638411 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.666124 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736147 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736215 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgf2\" (UniqueName: \"kubernetes.io/projected/2876084b-7055-449d-9ddb-447d3a515d80-kube-api-access-rjgf2\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736248 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-config-data\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-run-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736316 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-scripts\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736359 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-log-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.736430 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838373 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838456 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjgf2\" (UniqueName: \"kubernetes.io/projected/2876084b-7055-449d-9ddb-447d3a515d80-kube-api-access-rjgf2\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838593 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-config-data\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838623 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-run-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838702 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-scripts\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838727 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.838763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-log-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.839244 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-log-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.840338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2876084b-7055-449d-9ddb-447d3a515d80-run-httpd\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.843365 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-scripts\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.843856 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.845323 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.856907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.857681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjgf2\" (UniqueName: \"kubernetes.io/projected/2876084b-7055-449d-9ddb-447d3a515d80-kube-api-access-rjgf2\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.858672 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2876084b-7055-449d-9ddb-447d3a515d80-config-data\") pod \"ceilometer-0\" (UID: \"2876084b-7055-449d-9ddb-447d3a515d80\") " pod="openstack/ceilometer-0" Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.900514 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" containerID="cri-o://d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e" gracePeriod=604796 Feb 17 16:18:36 crc kubenswrapper[4808]: I0217 16:18:36.959344 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 17 16:18:37 crc kubenswrapper[4808]: I0217 16:18:37.181062 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17f0491-7507-40fb-a2b9-d13d2c51eed6" path="/var/lib/kubelet/pods/f17f0491-7507-40fb-a2b9-d13d2c51eed6/volumes" Feb 17 16:18:37 crc kubenswrapper[4808]: I0217 16:18:37.457010 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 17 16:18:37 crc kubenswrapper[4808]: I0217 16:18:37.557470 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" containerID="cri-o://a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" gracePeriod=604796 Feb 17 16:18:37 crc kubenswrapper[4808]: E0217 16:18:37.578229 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:37 crc kubenswrapper[4808]: E0217 16:18:37.578302 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:37 crc kubenswrapper[4808]: E0217 16:18:37.578430 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:38 crc kubenswrapper[4808]: I0217 16:18:38.286120 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"f92594a71ea944bf109615e581db18efb031cc05bb8c8d28aae1396df5d993f8"} Feb 17 16:18:39 crc kubenswrapper[4808]: I0217 16:18:39.299605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"bd3198028a543422a4bd4d3a3cd25c69aef82a35267a9cbb49dca0aff6c1e668"} Feb 17 16:18:39 crc kubenswrapper[4808]: I0217 16:18:39.299998 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"2d41f32d17275147482bb41cb71d9147907575108a2bbf4b49468be01106e41a"} Feb 17 16:18:40 crc kubenswrapper[4808]: E0217 16:18:40.613437 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:41 crc kubenswrapper[4808]: I0217 16:18:41.332072 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2876084b-7055-449d-9ddb-447d3a515d80","Type":"ContainerStarted","Data":"acb126793a2542f2fe3045ec80693fb67ee69ce5e18a3a82729621b0d384f1b3"} Feb 17 16:18:41 crc kubenswrapper[4808]: I0217 16:18:41.332354 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 17 16:18:41 crc kubenswrapper[4808]: E0217 16:18:41.335284 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:41 crc kubenswrapper[4808]: I0217 16:18:41.779979 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Feb 17 16:18:42 crc kubenswrapper[4808]: I0217 16:18:42.038325 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Feb 17 16:18:42 crc kubenswrapper[4808]: E0217 16:18:42.352863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.363831 4808 generic.go:334] "Generic (PLEG): container finished" podID="698c36e9-5f87-4836-8660-aaceac669005" containerID="d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e" exitCode=0 Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.363895 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerDied","Data":"d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e"} Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.680309 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814428 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814468 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814553 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814623 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814694 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814739 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814811 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.814845 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815503 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc 
kubenswrapper[4808]: I0217 16:18:43.815664 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815692 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") pod \"698c36e9-5f87-4836-8660-aaceac669005\" (UID: \"698c36e9-5f87-4836-8660-aaceac669005\") " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.816194 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.816863 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.817971 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.817999 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.815800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.824944 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f" (OuterVolumeSpecName: "kube-api-access-bqv9f") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "kube-api-access-bqv9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.836500 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.838901 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.840183 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info" (OuterVolumeSpecName: "pod-info") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: E0217 16:18:43.859320 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59be2048_a5c9_44c9_a3ef_651002555ff0.slice/crio-conmon-a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59be2048_a5c9_44c9_a3ef_651002555ff0.slice/crio-a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.864038 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4" (OuterVolumeSpecName: "persistence") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). 
InnerVolumeSpecName "pvc-41460aca-532a-4a4a-9959-90e4e175e3d4". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.876141 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data" (OuterVolumeSpecName: "config-data") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.921129 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqv9f\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-kube-api-access-bqv9f\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922345 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/698c36e9-5f87-4836-8660-aaceac669005-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922424 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922497 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") on node \"crc\" " Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922565 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 
16:18:43.922731 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/698c36e9-5f87-4836-8660-aaceac669005-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.922795 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.997242 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 17 16:18:43 crc kubenswrapper[4808]: I0217 16:18:43.997449 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-41460aca-532a-4a4a-9959-90e4e175e3d4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4") on node "crc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.010163 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.022190 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf" (OuterVolumeSpecName: "server-conf") pod "698c36e9-5f87-4836-8660-aaceac669005" (UID: "698c36e9-5f87-4836-8660-aaceac669005"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.025193 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.025220 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/698c36e9-5f87-4836-8660-aaceac669005-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.025231 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/698c36e9-5f87-4836-8660-aaceac669005-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.291538 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.329929 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330017 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330056 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330083 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.330136 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.331422 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.331661 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.332448 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.332540 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.333110 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.333241 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") pod 
\"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.333331 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") pod \"59be2048-a5c9-44c9-a3ef-651002555ff0\" (UID: \"59be2048-a5c9-44c9-a3ef-651002555ff0\") " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.334276 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.334915 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.335515 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.342672 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj" (OuterVolumeSpecName: "kube-api-access-flvtj") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). 
InnerVolumeSpecName "kube-api-access-flvtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.342813 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.349509 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info" (OuterVolumeSpecName: "pod-info") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.355110 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.360419 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5" (OuterVolumeSpecName: "persistence") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "pvc-768b6430-57c2-4601-b30e-a3b0639286e5". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.384076 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"698c36e9-5f87-4836-8660-aaceac669005","Type":"ContainerDied","Data":"57ad7e9e95603b9e00dced5aff567d0fff1bbfb9d96b8bfdb7074f711d80c274"} Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.384132 4808 scope.go:117] "RemoveContainer" containerID="d280bb8f394e232e2279b423416261e7f2f5d4ad76577ac87b19691f2c6abe5e" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.384317 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.424918 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data" (OuterVolumeSpecName: "config-data") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427459 4808 generic.go:334] "Generic (PLEG): container finished" podID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" exitCode=0 Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427500 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerDied","Data":"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807"} Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427527 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"59be2048-a5c9-44c9-a3ef-651002555ff0","Type":"ContainerDied","Data":"f86bb416640f1c93ce31ac0513d794573c83b4fcf30431f9c4619fd3c48ca73d"} Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.427532 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437273 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flvtj\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-kube-api-access-flvtj\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437637 4808 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/59be2048-a5c9-44c9-a3ef-651002555ff0-pod-info\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437653 4808 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/59be2048-a5c9-44c9-a3ef-651002555ff0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437683 4808 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") on node \"crc\" " Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437699 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437714 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437726 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-tls\") on node \"crc\" 
DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.437739 4808 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.458088 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf" (OuterVolumeSpecName: "server-conf") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.485497 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.499096 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.514558 4808 scope.go:117] "RemoveContainer" containerID="19fb997acb847b4585d9f3a1732ebf382a63b29716209b27bb21be0c936a6430" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.540451 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.542966 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.542995 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.543025 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543034 
4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.543045 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543053 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.543068 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543074 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="setup-container" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543305 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="698c36e9-5f87-4836-8660-aaceac669005" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.543332 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" containerName="rabbitmq" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.544681 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.544897 4808 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.545263 4808 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/59be2048-a5c9-44c9-a3ef-651002555ff0-server-conf\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.545274 4808 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-768b6430-57c2-4601-b30e-a3b0639286e5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5") on node "crc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.553977 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.553987 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.553987 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.554187 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gc9dp" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.554223 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.554845 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.556689 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.556894 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.570163 4808 scope.go:117] "RemoveContainer" 
containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.602695 4808 scope.go:117] "RemoveContainer" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.627269 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "59be2048-a5c9-44c9-a3ef-651002555ff0" (UID: "59be2048-a5c9-44c9-a3ef-651002555ff0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.636824 4808 scope.go:117] "RemoveContainer" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.638115 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807\": container with ID starting with a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807 not found: ID does not exist" containerID="a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.638175 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807"} err="failed to get container status \"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807\": rpc error: code = NotFound desc = could not find container \"a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807\": container with ID starting with a66e5c234068e929dfcc62adceb6ad71c707c8e45c67ae3fa19c099a1c7d0807 not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.638202 
4808 scope.go:117] "RemoveContainer" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" Feb 17 16:18:44 crc kubenswrapper[4808]: E0217 16:18:44.638613 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9\": container with ID starting with 5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9 not found: ID does not exist" containerID="5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.638650 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9"} err="failed to get container status \"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9\": rpc error: code = NotFound desc = could not find container \"5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9\": container with ID starting with 5486e6dc5697e1e74b776b15f38831dacbc3e1b4bd9ce88391352b7167a44fe9 not found: ID does not exist" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647598 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-config-data\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647636 
4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647653 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/357e5513-bef7-45cc-b62f-072a161ccce3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647681 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647695 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhc4\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-kube-api-access-szhc4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647768 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647797 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/357e5513-bef7-45cc-b62f-072a161ccce3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.647854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.648133 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.648340 4808 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/59be2048-a5c9-44c9-a3ef-651002555ff0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.648397 4808 reconciler_common.go:293] "Volume detached for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750058 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750113 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/357e5513-bef7-45cc-b62f-072a161ccce3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750183 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhc4\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-kube-api-access-szhc4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750265 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " 
pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750314 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750343 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/357e5513-bef7-45cc-b62f-072a161ccce3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750488 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750536 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.750548 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-config-data\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.752255 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-config-data\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.752749 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.752954 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.753595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/357e5513-bef7-45cc-b62f-072a161ccce3-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.758551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.758591 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/357e5513-bef7-45cc-b62f-072a161ccce3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.758916 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/357e5513-bef7-45cc-b62f-072a161ccce3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.759252 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.759292 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f412b4a2036f29492410677330a9ca63ffe6d8a8c319c56d242ee67a4a97d25/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.759370 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.774767 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.789668 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhc4\" (UniqueName: \"kubernetes.io/projected/357e5513-bef7-45cc-b62f-072a161ccce3-kube-api-access-szhc4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.797826 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.830220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.832516 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835353 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835532 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835672 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.835823 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.841228 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gsb4q" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.841562 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.841771 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.849089 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-41460aca-532a-4a4a-9959-90e4e175e3d4\") pod \"rabbitmq-server-0\" (UID: \"357e5513-bef7-45cc-b62f-072a161ccce3\") " pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.867200 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.886454 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.960905 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961286 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65t8j\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-kube-api-access-65t8j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961349 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961391 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961442 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961521 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961584 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961624 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961655 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961692 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:44 crc kubenswrapper[4808]: I0217 16:18:44.961723 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063352 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063424 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063461 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063510 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063539 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65t8j\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-kube-api-access-65t8j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063602 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063636 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063681 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" 
Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063820 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.063860 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.064802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.064837 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.065235 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.065799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.066536 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.069034 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.069197 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.069383 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.073982 4808 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.074012 4808 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/be40d6772f21ead376a83ce27352b0ce535ee01ddc50414a5dc6453b6d9bcfec/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.075177 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.094230 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65t8j\" (UniqueName: \"kubernetes.io/projected/9da8d67e-00c6-4ba1-a08b-09c5653d93fd-kube-api-access-65t8j\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.136028 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-768b6430-57c2-4601-b30e-a3b0639286e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-768b6430-57c2-4601-b30e-a3b0639286e5\") pod \"rabbitmq-cell1-server-0\" (UID: \"9da8d67e-00c6-4ba1-a08b-09c5653d93fd\") " pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.156988 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.186345 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59be2048-a5c9-44c9-a3ef-651002555ff0" path="/var/lib/kubelet/pods/59be2048-a5c9-44c9-a3ef-651002555ff0/volumes" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.202591 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="698c36e9-5f87-4836-8660-aaceac669005" path="/var/lib/kubelet/pods/698c36e9-5f87-4836-8660-aaceac669005/volumes" Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.384625 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 17 16:18:45 crc kubenswrapper[4808]: W0217 16:18:45.421168 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod357e5513_bef7_45cc_b62f_072a161ccce3.slice/crio-3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d WatchSource:0}: Error finding container 3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d: Status 404 returned error can't find the container with id 3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.463478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerStarted","Data":"3d7a4deea9f03cd17503d1ccf0226ec64ce9335540f665db854da6c3c7a8424d"} Feb 17 16:18:45 crc kubenswrapper[4808]: I0217 16:18:45.661975 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 17 16:18:45 crc kubenswrapper[4808]: W0217 16:18:45.666315 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9da8d67e_00c6_4ba1_a08b_09c5653d93fd.slice/crio-e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d WatchSource:0}: Error finding container e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d: Status 404 returned error can't find the container with id e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d Feb 17 16:18:46 crc kubenswrapper[4808]: I0217 16:18:46.484811 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerStarted","Data":"e100f3b82541b322c159ecac6f827481871a427c00d95b86434b34b9e4a7584d"} Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.286285 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.286688 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.286835 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:47 crc kubenswrapper[4808]: E0217 16:18:47.288196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.375157 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.376807 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.383264 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.391980 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426224 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426542 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: 
\"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426594 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426636 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.426708 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.498768 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerStarted","Data":"5ca487733509062335b917cabbb5c95c9c9189e5d3adc4142b7ced90b7a9fc87"} Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 
16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528771 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528810 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528890 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528916 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.528967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 
16:18:47.529026 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.529891 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.530419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.530953 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.531495 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.531515 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.531907 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.553376 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"dnsmasq-dns-dbb88bf8c-fnvwp\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:47 crc kubenswrapper[4808]: I0217 16:18:47.696479 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:48 crc kubenswrapper[4808]: W0217 16:18:48.221419 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod409792c8_f6ab_44df_a8d8_8c08bc58ed30.slice/crio-c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df WatchSource:0}: Error finding container c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df: Status 404 returned error can't find the container with id c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.221502 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.514055 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerStarted","Data":"ae77a46583c3e8204d183609b0e2514ca4873bf349237e9718653cb5859c2857"} Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.516134 4808 generic.go:334] "Generic (PLEG): container finished" podID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerID="20dc982f9bc098e9d7e98d8a7978009b4306c29975504eb93ecc3923345a7b57" exitCode=0 Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.516179 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerDied","Data":"20dc982f9bc098e9d7e98d8a7978009b4306c29975504eb93ecc3923345a7b57"} Feb 17 16:18:48 crc kubenswrapper[4808]: I0217 16:18:48.516223 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerStarted","Data":"c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df"} Feb 17 16:18:49 crc 
kubenswrapper[4808]: I0217 16:18:49.531425 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerStarted","Data":"d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1"} Feb 17 16:18:49 crc kubenswrapper[4808]: I0217 16:18:49.531732 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:49 crc kubenswrapper[4808]: I0217 16:18:49.554897 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" podStartSLOduration=2.554877162 podStartE2EDuration="2.554877162s" podCreationTimestamp="2026-02-17 16:18:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:18:49.550074954 +0000 UTC m=+1493.066434027" watchObservedRunningTime="2026-02-17 16:18:49.554877162 +0000 UTC m=+1493.071236235" Feb 17 16:18:51 crc kubenswrapper[4808]: I0217 16:18:51.592946 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:18:51 crc kubenswrapper[4808]: I0217 16:18:51.593257 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:18:55 crc kubenswrapper[4808]: I0217 16:18:55.159306 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 
16:18:55.277365 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.277467 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.277717 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.279068 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:55 crc kubenswrapper[4808]: E0217 16:18:55.602930 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.697762 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.795487 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.795709 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns" containerID="cri-o://726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" gracePeriod=10 Feb 17 16:18:57 crc kubenswrapper[4808]: I0217 16:18:57.987780 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-mqnbz"] Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:57.993718 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.012627 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-mqnbz"] Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-svc\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086694 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-config\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086734 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s22fq\" (UniqueName: \"kubernetes.io/projected/3d16d4be-1ab3-4261-97a7-054701cf9dba-kube-api-access-s22fq\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086849 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.086933 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.087011 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.087053 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-svc\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-config\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189310 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s22fq\" (UniqueName: \"kubernetes.io/projected/3d16d4be-1ab3-4261-97a7-054701cf9dba-kube-api-access-s22fq\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189344 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.189494 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191128 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-config\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191509 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191865 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-svc\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.191926 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.192121 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d16d4be-1ab3-4261-97a7-054701cf9dba-ovsdbserver-nb\") pod 
\"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.219914 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s22fq\" (UniqueName: \"kubernetes.io/projected/3d16d4be-1ab3-4261-97a7-054701cf9dba-kube-api-access-s22fq\") pod \"dnsmasq-dns-85f64749dc-mqnbz\" (UID: \"3d16d4be-1ab3-4261-97a7-054701cf9dba\") " pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.332495 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.477711 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.596725 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.596806 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.596897 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.597007 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.597041 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.597079 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") pod \"236a76a9-e108-4cb9-b76d-825e33bdad41\" (UID: \"236a76a9-e108-4cb9-b76d-825e33bdad41\") " Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.602055 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc" (OuterVolumeSpecName: "kube-api-access-fxgsc") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "kube-api-access-fxgsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648058 4808 generic.go:334] "Generic (PLEG): container finished" podID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" exitCode=0 Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648113 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerDied","Data":"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"} Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648148 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" event={"ID":"236a76a9-e108-4cb9-b76d-825e33bdad41","Type":"ContainerDied","Data":"8fe947d0790a922756d78327f84cf510a97c6419a7ba4cf6d5a3665a8b91aebe"} Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648169 4808 scope.go:117] "RemoveContainer" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.648358 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-kf4dn" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.663632 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.665419 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config" (OuterVolumeSpecName: "config") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.667096 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.672308 4808 scope.go:117] "RemoveContainer" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.676226 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.688283 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "236a76a9-e108-4cb9-b76d-825e33bdad41" (UID: "236a76a9-e108-4cb9-b76d-825e33bdad41"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693369 4808 scope.go:117] "RemoveContainer" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" Feb 17 16:18:58 crc kubenswrapper[4808]: E0217 16:18:58.693713 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65\": container with ID starting with 726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65 not found: ID does not exist" containerID="726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693748 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65"} err="failed to get container status \"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65\": rpc error: code = NotFound desc = could not find container \"726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65\": container with ID starting with 726982a5e02918c4f9048d79766ece8c9bd2f3298827c5b5c0acd8c07d834e65 not found: ID does not exist" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693769 4808 scope.go:117] "RemoveContainer" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" Feb 17 16:18:58 crc kubenswrapper[4808]: E0217 16:18:58.693955 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e\": container with ID starting with b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e not found: ID does not exist" containerID="b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.693979 
4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e"} err="failed to get container status \"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e\": rpc error: code = NotFound desc = could not find container \"b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e\": container with ID starting with b1830bc8bbf4b2312521eeaea4fe1cc258bc9a13a7a1aef82477a26dccb0e21e not found: ID does not exist" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699767 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgsc\" (UniqueName: \"kubernetes.io/projected/236a76a9-e108-4cb9-b76d-825e33bdad41-kube-api-access-fxgsc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699790 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699799 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699811 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699818 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.699827 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/236a76a9-e108-4cb9-b76d-825e33bdad41-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:18:58 crc kubenswrapper[4808]: I0217 16:18:58.835990 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-mqnbz"] Feb 17 16:18:58 crc kubenswrapper[4808]: W0217 16:18:58.838420 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d16d4be_1ab3_4261_97a7_054701cf9dba.slice/crio-4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637 WatchSource:0}: Error finding container 4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637: Status 404 returned error can't find the container with id 4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637 Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.088953 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.097515 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-kf4dn"] Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.158484 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" path="/var/lib/kubelet/pods/236a76a9-e108-4cb9-b76d-825e33bdad41/volumes" Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.665888 4808 generic.go:334] "Generic (PLEG): container finished" podID="3d16d4be-1ab3-4261-97a7-054701cf9dba" containerID="9a7fc5641b68862f1d3e76b5ba9a8b27b392b25b5b1b1869bd5782ffe16d7cfb" exitCode=0 Feb 17 16:18:59 crc kubenswrapper[4808]: I0217 16:18:59.665940 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" event={"ID":"3d16d4be-1ab3-4261-97a7-054701cf9dba","Type":"ContainerDied","Data":"9a7fc5641b68862f1d3e76b5ba9a8b27b392b25b5b1b1869bd5782ffe16d7cfb"} Feb 17 16:18:59 
crc kubenswrapper[4808]: I0217 16:18:59.666520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" event={"ID":"3d16d4be-1ab3-4261-97a7-054701cf9dba","Type":"ContainerStarted","Data":"4f8abf5a3106c8db16366268419f6ed688fd3a9470de416f1149409e30f54637"} Feb 17 16:19:00 crc kubenswrapper[4808]: I0217 16:19:00.678520 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" event={"ID":"3d16d4be-1ab3-4261-97a7-054701cf9dba","Type":"ContainerStarted","Data":"016ca0b56ec9c54e7a9608d389c503625fd7451d943ef0dd7f826bf37802c0bf"} Feb 17 16:19:00 crc kubenswrapper[4808]: I0217 16:19:00.678972 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:19:00 crc kubenswrapper[4808]: I0217 16:19:00.716403 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" podStartSLOduration=3.716376855 podStartE2EDuration="3.716376855s" podCreationTimestamp="2026-02-17 16:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:00.701803606 +0000 UTC m=+1504.218162689" watchObservedRunningTime="2026-02-17 16:19:00.716376855 +0000 UTC m=+1504.232735948" Feb 17 16:19:02 crc kubenswrapper[4808]: E0217 16:19:02.148907 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:08 crc kubenswrapper[4808]: E0217 16:19:08.148171 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.333708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-mqnbz" Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.398064 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.398388 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns" containerID="cri-o://d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1" gracePeriod=10 Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.778696 4808 generic.go:334] "Generic (PLEG): container finished" podID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerID="d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1" exitCode=0 Feb 17 16:19:08 crc kubenswrapper[4808]: I0217 16:19:08.778767 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerDied","Data":"d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1"} Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.002763 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050785 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050835 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050900 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.050922 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.051070 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.051142 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") pod \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\" (UID: \"409792c8-f6ab-44df-a8d8-8c08bc58ed30\") " Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.070000 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g" (OuterVolumeSpecName: "kube-api-access-lhb5g") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "kube-api-access-lhb5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.119274 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.125996 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.132995 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.134985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.142866 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.146920 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config" (OuterVolumeSpecName: "config") pod "409792c8-f6ab-44df-a8d8-8c08bc58ed30" (UID: "409792c8-f6ab-44df-a8d8-8c08bc58ed30"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152434 4808 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152461 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhb5g\" (UniqueName: \"kubernetes.io/projected/409792c8-f6ab-44df-a8d8-8c08bc58ed30-kube-api-access-lhb5g\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152471 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152483 4808 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-config\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152492 4808 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152500 4808 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.152510 4808 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/409792c8-f6ab-44df-a8d8-8c08bc58ed30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.797927 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" event={"ID":"409792c8-f6ab-44df-a8d8-8c08bc58ed30","Type":"ContainerDied","Data":"c729358417ccc142b4f7228661c72ca3b99c7f68bec9bdccba36c4b7349760df"} Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.798004 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-fnvwp" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.798024 4808 scope.go:117] "RemoveContainer" containerID="d89b6a5725897056022cd0fbaaed349b8829b23e00c04e7df288e7961d3651d1" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.834428 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.844232 4808 scope.go:117] "RemoveContainer" containerID="20dc982f9bc098e9d7e98d8a7978009b4306c29975504eb93ecc3923345a7b57" Feb 17 16:19:09 crc kubenswrapper[4808]: I0217 16:19:09.849324 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-fnvwp"] Feb 17 16:19:11 crc kubenswrapper[4808]: I0217 16:19:11.159462 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" path="/var/lib/kubelet/pods/409792c8-f6ab-44df-a8d8-8c08bc58ed30/volumes" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.245267 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.245992 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.246152 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:19:14 crc kubenswrapper[4808]: E0217 16:19:14.247427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.923381 4808 generic.go:334] "Generic (PLEG): container finished" podID="357e5513-bef7-45cc-b62f-072a161ccce3" containerID="5ca487733509062335b917cabbb5c95c9c9189e5d3adc4142b7ced90b7a9fc87" exitCode=0 Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.923959 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerDied","Data":"5ca487733509062335b917cabbb5c95c9c9189e5d3adc4142b7ced90b7a9fc87"} Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.931626 4808 generic.go:334] "Generic (PLEG): container finished" podID="9da8d67e-00c6-4ba1-a08b-09c5653d93fd" containerID="ae77a46583c3e8204d183609b0e2514ca4873bf349237e9718653cb5859c2857" exitCode=0 Feb 17 16:19:19 crc kubenswrapper[4808]: I0217 16:19:19.931681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerDied","Data":"ae77a46583c3e8204d183609b0e2514ca4873bf349237e9718653cb5859c2857"} Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.943839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"357e5513-bef7-45cc-b62f-072a161ccce3","Type":"ContainerStarted","Data":"904f6f9146b129e8fb603f170c4eb5fe656441b9f59b4dd19f9f8151ed9b9506"} Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.944265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.945452 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"9da8d67e-00c6-4ba1-a08b-09c5653d93fd","Type":"ContainerStarted","Data":"7100610f263d6b00c7051e727dbccb6f0db8d39cdc23ff03e93b119fa0586576"} Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.945644 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.967625 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.967599825 podStartE2EDuration="36.967599825s" podCreationTimestamp="2026-02-17 16:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:20.96294338 +0000 UTC m=+1524.479302443" watchObservedRunningTime="2026-02-17 16:19:20.967599825 +0000 UTC m=+1524.483958918" Feb 17 16:19:20 crc kubenswrapper[4808]: I0217 16:19:20.984947 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.984899916 podStartE2EDuration="36.984899916s" podCreationTimestamp="2026-02-17 16:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:19:20.983289573 +0000 UTC m=+1524.499648666" watchObservedRunningTime="2026-02-17 16:19:20.984899916 +0000 UTC m=+1524.501258989" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.592713 4808 
patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.592981 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.744220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"] Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.744876 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="init" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.744953 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="init" Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.745336 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745439 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns" Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.745532 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745600 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" 
containerName="dnsmasq-dns" Feb 17 16:19:21 crc kubenswrapper[4808]: E0217 16:19:21.745661 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="init" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745714 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="init" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.745962 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="236a76a9-e108-4cb9-b76d-825e33bdad41" containerName="dnsmasq-dns" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.746023 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="409792c8-f6ab-44df-a8d8-8c08bc58ed30" containerName="dnsmasq-dns" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.746762 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.748353 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.748814 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.748987 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.750093 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.791599 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"] Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.821350 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.821615 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.821865 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.822079 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924623 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924686 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924743 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.924800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.934414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.934538 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.934879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:21 crc kubenswrapper[4808]: I0217 16:19:21.955192 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:22 crc kubenswrapper[4808]: I0217 16:19:22.066826 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.270185 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.270525 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.270711 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:19:22 crc kubenswrapper[4808]: E0217 16:19:22.271828 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:19:22 crc kubenswrapper[4808]: I0217 16:19:22.660097 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl"] Feb 17 16:19:22 crc kubenswrapper[4808]: W0217 16:19:22.660388 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod785a49f6_7a06_4787_a829_fc9956730c15.slice/crio-7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37 WatchSource:0}: Error finding container 7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37: Status 404 returned error can't find the container with id 7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37 Feb 17 16:19:22 crc kubenswrapper[4808]: I0217 16:19:22.963562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerStarted","Data":"7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37"} Feb 17 16:19:29 crc kubenswrapper[4808]: E0217 16:19:29.147074 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:34 crc kubenswrapper[4808]: I0217 16:19:34.892900 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 17 16:19:35 crc kubenswrapper[4808]: I0217 16:19:35.165171 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 17 16:19:36 crc kubenswrapper[4808]: E0217 
16:19:36.152324 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:19:36 crc kubenswrapper[4808]: I0217 16:19:36.163054 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerStarted","Data":"3b8a8a2382ccfbae19a06c099cc5a82f7309486b57a54008ca868209da2f44e5"} Feb 17 16:19:36 crc kubenswrapper[4808]: I0217 16:19:36.229774 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" podStartSLOduration=3.10474968 podStartE2EDuration="15.229748312s" podCreationTimestamp="2026-02-17 16:19:21 +0000 UTC" firstStartedPulling="2026-02-17 16:19:22.662692368 +0000 UTC m=+1526.179051441" lastFinishedPulling="2026-02-17 16:19:34.78769096 +0000 UTC m=+1538.304050073" observedRunningTime="2026-02-17 16:19:36.204329544 +0000 UTC m=+1539.720688627" watchObservedRunningTime="2026-02-17 16:19:36.229748312 +0000 UTC m=+1539.746107395" Feb 17 16:19:41 crc kubenswrapper[4808]: E0217 16:19:41.150091 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:48 crc kubenswrapper[4808]: I0217 16:19:48.320778 4808 generic.go:334] "Generic (PLEG): container finished" podID="785a49f6-7a06-4787-a829-fc9956730c15" 
containerID="3b8a8a2382ccfbae19a06c099cc5a82f7309486b57a54008ca868209da2f44e5" exitCode=0 Feb 17 16:19:48 crc kubenswrapper[4808]: I0217 16:19:48.320866 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerDied","Data":"3b8a8a2382ccfbae19a06c099cc5a82f7309486b57a54008ca868209da2f44e5"} Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.025509 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102301 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102360 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.102418 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") pod \"785a49f6-7a06-4787-a829-fc9956730c15\" (UID: \"785a49f6-7a06-4787-a829-fc9956730c15\") " Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.108442 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.110251 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm" (OuterVolumeSpecName: "kube-api-access-nwxgm") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "kube-api-access-nwxgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.132506 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory" (OuterVolumeSpecName: "inventory") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.155340 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "785a49f6-7a06-4787-a829-fc9956730c15" (UID: "785a49f6-7a06-4787-a829-fc9956730c15"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.205215 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwxgm\" (UniqueName: \"kubernetes.io/projected/785a49f6-7a06-4787-a829-fc9956730c15-kube-api-access-nwxgm\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.205987 4808 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.206000 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.206011 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/785a49f6-7a06-4787-a829-fc9956730c15-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.348440 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" event={"ID":"785a49f6-7a06-4787-a829-fc9956730c15","Type":"ContainerDied","Data":"7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37"} Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.348485 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7259b6afb6b29cded89d16ab2e57b5467e310105433978c0136192dfa9605c37" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.348507 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.469732 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq"] Feb 17 16:19:50 crc kubenswrapper[4808]: E0217 16:19:50.470524 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="785a49f6-7a06-4787-a829-fc9956730c15" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.470545 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="785a49f6-7a06-4787-a829-fc9956730c15" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.470830 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="785a49f6-7a06-4787-a829-fc9956730c15" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.471761 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.474564 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.474882 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.475296 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.475391 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.484366 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq"] Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.512609 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.512718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.512748 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.614628 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.614678 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.614792 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.623902 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: 
\"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.628009 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.633997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-8pfvq\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:50 crc kubenswrapper[4808]: I0217 16:19:50.822528 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:51 crc kubenswrapper[4808]: E0217 16:19:51.148761 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.380695 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq"] Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592006 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592082 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592128 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592912 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:19:51 crc kubenswrapper[4808]: I0217 16:19:51.592975 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" gracePeriod=600 Feb 17 16:19:51 crc kubenswrapper[4808]: E0217 16:19:51.719315 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.374058 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" exitCode=0 Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.374101 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"} Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.374543 4808 scope.go:117] "RemoveContainer" containerID="34e69d9ce6b54cc95e099ff98c49ef8661be9798a1b5f5a56fc276247e76ba49" Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.375426 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:19:52 crc kubenswrapper[4808]: E0217 16:19:52.376137 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.376817 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerStarted","Data":"85c281cb387270bbfc86bf45957a2a330927a4e4a3dc86d981d5d1496be3a77c"} Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.376853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerStarted","Data":"3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac"} Feb 17 16:19:52 crc kubenswrapper[4808]: I0217 16:19:52.409950 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" podStartSLOduration=1.735716215 podStartE2EDuration="2.409935537s" podCreationTimestamp="2026-02-17 16:19:50 +0000 UTC" firstStartedPulling="2026-02-17 16:19:51.38102161 +0000 UTC m=+1554.897380703" lastFinishedPulling="2026-02-17 16:19:52.055240952 +0000 UTC m=+1555.571600025" observedRunningTime="2026-02-17 16:19:52.406781523 +0000 UTC m=+1555.923140586" watchObservedRunningTime="2026-02-17 16:19:52.409935537 +0000 UTC m=+1555.926294600" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.275812 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.276255 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.276404 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:19:55 crc kubenswrapper[4808]: E0217 16:19:55.277707 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:19:55 crc kubenswrapper[4808]: I0217 16:19:55.414027 4808 generic.go:334] "Generic (PLEG): container finished" podID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerID="85c281cb387270bbfc86bf45957a2a330927a4e4a3dc86d981d5d1496be3a77c" exitCode=0 Feb 17 16:19:55 crc kubenswrapper[4808]: I0217 16:19:55.414080 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerDied","Data":"85c281cb387270bbfc86bf45957a2a330927a4e4a3dc86d981d5d1496be3a77c"} Feb 17 16:19:56 crc kubenswrapper[4808]: I0217 16:19:56.998411 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.086658 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") pod \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.086725 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") pod \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.086978 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") pod \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\" (UID: \"404291d9-a172-4a9a-8a0e-2f2514ce06ff\") " Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.092235 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl" (OuterVolumeSpecName: "kube-api-access-gttpl") pod "404291d9-a172-4a9a-8a0e-2f2514ce06ff" (UID: "404291d9-a172-4a9a-8a0e-2f2514ce06ff"). InnerVolumeSpecName "kube-api-access-gttpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.121268 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory" (OuterVolumeSpecName: "inventory") pod "404291d9-a172-4a9a-8a0e-2f2514ce06ff" (UID: "404291d9-a172-4a9a-8a0e-2f2514ce06ff"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.123062 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "404291d9-a172-4a9a-8a0e-2f2514ce06ff" (UID: "404291d9-a172-4a9a-8a0e-2f2514ce06ff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.189457 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gttpl\" (UniqueName: \"kubernetes.io/projected/404291d9-a172-4a9a-8a0e-2f2514ce06ff-kube-api-access-gttpl\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.189749 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.189892 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/404291d9-a172-4a9a-8a0e-2f2514ce06ff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.439244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" event={"ID":"404291d9-a172-4a9a-8a0e-2f2514ce06ff","Type":"ContainerDied","Data":"3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac"} Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.439286 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 
16:19:57.439351 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-8pfvq" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.523320 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"] Feb 17 16:19:57 crc kubenswrapper[4808]: E0217 16:19:57.525213 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.525240 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.525488 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="404291d9-a172-4a9a-8a0e-2f2514ce06ff" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.527263 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.529205 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.530053 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.530343 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.530629 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.543844 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"] Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597234 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597428 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597720 4808 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.597857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699642 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699790 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dsr2\" (UniqueName: 
\"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.699872 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.704341 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.705451 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.710108 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.723238 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:57 crc kubenswrapper[4808]: I0217 16:19:57.867872 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" Feb 17 16:19:58 crc kubenswrapper[4808]: I0217 16:19:58.468545 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"] Feb 17 16:19:58 crc kubenswrapper[4808]: W0217 16:19:58.479120 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4a30af7_342e_49c0_8e89_c38f11b7cc63.slice/crio-ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863 WatchSource:0}: Error finding container ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863: Status 404 returned error can't find the container with id ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863 Feb 17 16:19:59 crc kubenswrapper[4808]: I0217 16:19:59.462689 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerStarted","Data":"71c91d6451b64c7f7e3bd20b7f8ce8d6da0a6dbf093d38be3cac5d1529528868"} Feb 17 16:19:59 crc kubenswrapper[4808]: I0217 16:19:59.463011 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" 
event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerStarted","Data":"ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863"} Feb 17 16:19:59 crc kubenswrapper[4808]: I0217 16:19:59.487790 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" podStartSLOduration=2.09250356 podStartE2EDuration="2.487762067s" podCreationTimestamp="2026-02-17 16:19:57 +0000 UTC" firstStartedPulling="2026-02-17 16:19:58.486982552 +0000 UTC m=+1562.003341625" lastFinishedPulling="2026-02-17 16:19:58.882241059 +0000 UTC m=+1562.398600132" observedRunningTime="2026-02-17 16:19:59.480854503 +0000 UTC m=+1562.997213576" watchObservedRunningTime="2026-02-17 16:19:59.487762067 +0000 UTC m=+1563.004121150" Feb 17 16:20:02 crc kubenswrapper[4808]: E0217 16:20:02.149096 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:20:03 crc kubenswrapper[4808]: I0217 16:20:03.145868 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:20:03 crc kubenswrapper[4808]: E0217 16:20:03.146376 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:06 crc kubenswrapper[4808]: E0217 16:20:06.187403 4808 cadvisor_stats_provider.go:516] "Partial failure 
issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:07 crc kubenswrapper[4808]: E0217 16:20:07.154718 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.578339 4808 scope.go:117] "RemoveContainer" containerID="393504cd886f25701edec85a116ae5e2c966bd8cc6f3213385ba9edc2a2c6ec3" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.625348 4808 scope.go:117] "RemoveContainer" containerID="7aea08d602941315a47910cfb8dca2a1ac4425726486c35b99c77739c12a5b14" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.687921 4808 scope.go:117] "RemoveContainer" containerID="b60fbde46c6075a50ace4cd1663669a692d98861f29087030c80fceb181a0f6f" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.732478 4808 scope.go:117] "RemoveContainer" containerID="aa9c642e8bb62ae5d91fda2bdf24643392c75706213200f28e2d16c8e6a33f94" Feb 17 16:20:14 crc kubenswrapper[4808]: I0217 16:20:14.774761 4808 scope.go:117] "RemoveContainer" containerID="8e5f6f7a728607504ca216d406d1d8a535d1573f6c6ba0a924dbe399f84dae18" Feb 17 16:20:15 crc kubenswrapper[4808]: I0217 16:20:15.146451 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 
16:20:15.146917 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.246972 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.247306 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.247490 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:20:15 crc kubenswrapper[4808]: E0217 16:20:15.248771 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:20:16 crc kubenswrapper[4808]: E0217 16:20:16.443833 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:19 crc kubenswrapper[4808]: E0217 16:20:19.147869 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:20:26 crc kubenswrapper[4808]: E0217 16:20:26.777123 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:29 crc kubenswrapper[4808]: I0217 16:20:29.146684 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:20:29 crc kubenswrapper[4808]: E0217 16:20:29.147696 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:29 crc kubenswrapper[4808]: E0217 16:20:29.149203 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:20:32 crc kubenswrapper[4808]: E0217 16:20:32.150685 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:20:37 crc kubenswrapper[4808]: E0217 16:20:37.003967 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:40 crc kubenswrapper[4808]: I0217 16:20:40.146005 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:20:40 crc kubenswrapper[4808]: E0217 16:20:40.147641 4808 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:43 crc kubenswrapper[4808]: E0217 16:20:43.149292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:20:44 crc kubenswrapper[4808]: E0217 16:20:44.148294 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:20:47 crc kubenswrapper[4808]: E0217 16:20:47.304827 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice/crio-3bf03b7ceb2c96ff334dc08314d1c2d44c88e47d3e97f45d681b7dbbed8227ac\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod404291d9_a172_4a9a_8a0e_2f2514ce06ff.slice\": RecentStats: unable to find data in memory cache]" Feb 17 16:20:54 crc kubenswrapper[4808]: I0217 16:20:54.145674 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" 
Feb 17 16:20:54 crc kubenswrapper[4808]: E0217 16:20:54.146428 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:20:56 crc kubenswrapper[4808]: E0217 16:20:56.148937 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:20:58 crc kubenswrapper[4808]: E0217 16:20:58.147954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.019060 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.036920 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.074888 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.217330 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.217678 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.217749 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.320063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.320183 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.320320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.321131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.321425 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.349368 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"redhat-marketplace-7kpkn\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.362677 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:02 crc kubenswrapper[4808]: I0217 16:21:02.894623 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:02 crc kubenswrapper[4808]: W0217 16:21:02.896499 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4e6a34f_a3c5_453d_a8e0_244c279aa68f.slice/crio-58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da WatchSource:0}: Error finding container 58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da: Status 404 returned error can't find the container with id 58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.646718 4808 generic.go:334] "Generic (PLEG): container finished" podID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" exitCode=0 Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.647083 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f"} Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.647119 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerStarted","Data":"58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da"} Feb 17 16:21:03 crc kubenswrapper[4808]: I0217 16:21:03.650812 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:21:04 crc kubenswrapper[4808]: I0217 16:21:04.660214 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerStarted","Data":"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a"} Feb 17 16:21:05 crc kubenswrapper[4808]: I0217 16:21:05.681260 4808 generic.go:334] "Generic (PLEG): container finished" podID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" exitCode=0 Feb 17 16:21:05 crc kubenswrapper[4808]: I0217 16:21:05.681374 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a"} Feb 17 16:21:07 crc kubenswrapper[4808]: I0217 16:21:07.704446 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerStarted","Data":"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797"} Feb 17 16:21:07 crc kubenswrapper[4808]: I0217 16:21:07.722050 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7kpkn" podStartSLOduration=3.2642923059999998 podStartE2EDuration="6.722031284s" podCreationTimestamp="2026-02-17 16:21:01 +0000 UTC" firstStartedPulling="2026-02-17 16:21:03.648849112 +0000 UTC m=+1627.165208195" lastFinishedPulling="2026-02-17 16:21:07.1065881 +0000 UTC m=+1630.622947173" observedRunningTime="2026-02-17 16:21:07.719492036 +0000 UTC m=+1631.235851129" watchObservedRunningTime="2026-02-17 16:21:07.722031284 +0000 UTC m=+1631.238390357" Feb 17 16:21:08 crc kubenswrapper[4808]: E0217 16:21:08.148420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:09 crc kubenswrapper[4808]: I0217 16:21:09.146499 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:09 crc kubenswrapper[4808]: E0217 16:21:09.147060 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:21:11 crc kubenswrapper[4808]: E0217 16:21:11.149285 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.363209 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.363641 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.425329 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.859993 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:12 crc kubenswrapper[4808]: I0217 16:21:12.915421 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:14 crc kubenswrapper[4808]: I0217 16:21:14.831672 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7kpkn" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server" containerID="cri-o://bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" gracePeriod=2 Feb 17 16:21:14 crc kubenswrapper[4808]: I0217 16:21:14.920621 4808 scope.go:117] "RemoveContainer" containerID="256eec0493e7fac44365f09c9ecea2db586554f077823fc95da099751524686d" Feb 17 16:21:14 crc kubenswrapper[4808]: I0217 16:21:14.964943 4808 scope.go:117] "RemoveContainer" containerID="a81fffa1dbaddd4905f2490f1b43e8825142981115e721e7e79501c10a7af652" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.449655 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.513283 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") pod \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.513405 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") pod \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.513511 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") pod \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\" (UID: \"c4e6a34f-a3c5-453d-a8e0-244c279aa68f\") " Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.514802 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities" (OuterVolumeSpecName: "utilities") pod "c4e6a34f-a3c5-453d-a8e0-244c279aa68f" (UID: "c4e6a34f-a3c5-453d-a8e0-244c279aa68f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.522868 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc" (OuterVolumeSpecName: "kube-api-access-mh9qc") pod "c4e6a34f-a3c5-453d-a8e0-244c279aa68f" (UID: "c4e6a34f-a3c5-453d-a8e0-244c279aa68f"). InnerVolumeSpecName "kube-api-access-mh9qc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.564169 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4e6a34f-a3c5-453d-a8e0-244c279aa68f" (UID: "c4e6a34f-a3c5-453d-a8e0-244c279aa68f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.616786 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.616823 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.616836 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh9qc\" (UniqueName: \"kubernetes.io/projected/c4e6a34f-a3c5-453d-a8e0-244c279aa68f-kube-api-access-mh9qc\") on node \"crc\" DevicePath \"\"" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849502 4808 generic.go:334] "Generic (PLEG): container finished" podID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" exitCode=0 Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797"} Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849595 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7kpkn" event={"ID":"c4e6a34f-a3c5-453d-a8e0-244c279aa68f","Type":"ContainerDied","Data":"58a123d0cf872ccfa1d13d556eeba502e7247af82124e5887cddf5c4618985da"} Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849616 4808 scope.go:117] "RemoveContainer" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.849759 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7kpkn" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.884301 4808 scope.go:117] "RemoveContainer" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.900804 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.913704 4808 scope.go:117] "RemoveContainer" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.914698 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7kpkn"] Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.950839 4808 scope.go:117] "RemoveContainer" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" Feb 17 16:21:15 crc kubenswrapper[4808]: E0217 16:21:15.951381 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797\": container with ID starting with bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797 not found: ID does not exist" containerID="bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.951513 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797"} err="failed to get container status \"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797\": rpc error: code = NotFound desc = could not find container \"bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797\": container with ID starting with bc678ceb9ba35b9d93f987954ff15a382cc01cf598d3e6929ad41e00b1326797 not found: ID does not exist" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.951649 4808 scope.go:117] "RemoveContainer" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" Feb 17 16:21:15 crc kubenswrapper[4808]: E0217 16:21:15.952186 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a\": container with ID starting with 96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a not found: ID does not exist" containerID="96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.952292 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a"} err="failed to get container status \"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a\": rpc error: code = NotFound desc = could not find container \"96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a\": container with ID starting with 96c0bb98b88359fe533d8206e5e69230b0be81e672510db74ff3204e1943906a not found: ID does not exist" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.952415 4808 scope.go:117] "RemoveContainer" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" Feb 17 16:21:15 crc kubenswrapper[4808]: E0217 
16:21:15.952778 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f\": container with ID starting with 1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f not found: ID does not exist" containerID="1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f" Feb 17 16:21:15 crc kubenswrapper[4808]: I0217 16:21:15.952903 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f"} err="failed to get container status \"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f\": rpc error: code = NotFound desc = could not find container \"1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f\": container with ID starting with 1d210c635ed371a09b67590952111fc432c489ddf228de0a62fc51e181a3886f not found: ID does not exist" Feb 17 16:21:17 crc kubenswrapper[4808]: I0217 16:21:17.158358 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" path="/var/lib/kubelet/pods/c4e6a34f-a3c5-453d-a8e0-244c279aa68f/volumes" Feb 17 16:21:19 crc kubenswrapper[4808]: E0217 16:21:19.149791 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.273751 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.274129 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.274316 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:21:22 crc kubenswrapper[4808]: E0217 16:21:22.275599 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:23 crc kubenswrapper[4808]: I0217 16:21:23.146202 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:23 crc kubenswrapper[4808]: E0217 16:21:23.146855 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:21:32 crc kubenswrapper[4808]: E0217 16:21:32.148126 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:21:35 crc kubenswrapper[4808]: E0217 16:21:35.148047 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:38 crc kubenswrapper[4808]: I0217 16:21:38.146056 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:21:38 crc kubenswrapper[4808]: E0217 16:21:38.146792 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.149196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.270154 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.270439 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.270824 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:21:46 crc kubenswrapper[4808]: E0217 16:21:46.272251 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:21:51 crc kubenswrapper[4808]: I0217 16:21:51.146200 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:21:51 crc kubenswrapper[4808]: E0217 16:21:51.147640 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:22:01 crc kubenswrapper[4808]: E0217 16:22:01.150732 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:22:01 crc kubenswrapper[4808]: E0217 16:22:01.150769 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:22:02 crc kubenswrapper[4808]: I0217 16:22:02.145892 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:22:02 crc kubenswrapper[4808]: E0217 16:22:02.146258 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:22:14 crc kubenswrapper[4808]: E0217 16:22:14.149431 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:22:14 crc kubenswrapper[4808]: E0217 16:22:14.149472 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:22:16 crc kubenswrapper[4808]: I0217 16:22:16.146975 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:22:16 crc kubenswrapper[4808]: E0217 16:22:16.147424 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:22:25 crc kubenswrapper[4808]: E0217 16:22:25.148831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:22:28 crc kubenswrapper[4808]: I0217 16:22:28.147442 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:22:28 crc kubenswrapper[4808]: E0217 16:22:28.148016 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:22:28 crc kubenswrapper[4808]: E0217 16:22:28.148230 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:22:38 crc kubenswrapper[4808]: E0217 16:22:38.149055 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:22:40 crc kubenswrapper[4808]: I0217 16:22:40.146460 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:22:40 crc kubenswrapper[4808]: E0217 16:22:40.146858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:22:43 crc kubenswrapper[4808]: E0217 16:22:43.149546 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:22:51 crc kubenswrapper[4808]: I0217 16:22:51.145696 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:22:51 crc kubenswrapper[4808]: E0217 16:22:51.146549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:22:53 crc kubenswrapper[4808]: E0217 16:22:53.149687 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:22:56 crc kubenswrapper[4808]: I0217 16:22:56.020807 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerDied","Data":"71c91d6451b64c7f7e3bd20b7f8ce8d6da0a6dbf093d38be3cac5d1529528868"}
Feb 17 16:22:56 crc kubenswrapper[4808]: I0217 16:22:56.020961 4808 generic.go:334] "Generic (PLEG): container finished" podID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerID="71c91d6451b64c7f7e3bd20b7f8ce8d6da0a6dbf093d38be3cac5d1529528868" exitCode=0
Feb 17 16:22:57 crc kubenswrapper[4808]: E0217 16:22:57.157297 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.541198 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655465 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") "
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655528 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") "
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655681 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") "
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.655723 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") pod \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\" (UID: \"e4a30af7-342e-49c0-8e89-c38f11b7cc63\") "
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.660669 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2" (OuterVolumeSpecName: "kube-api-access-9dsr2") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "kube-api-access-9dsr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.663676 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.684322 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.689029 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory" (OuterVolumeSpecName: "inventory") pod "e4a30af7-342e-49c0-8e89-c38f11b7cc63" (UID: "e4a30af7-342e-49c0-8e89-c38f11b7cc63"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757547 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dsr2\" (UniqueName: \"kubernetes.io/projected/e4a30af7-342e-49c0-8e89-c38f11b7cc63-kube-api-access-9dsr2\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757598 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757610 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:57 crc kubenswrapper[4808]: I0217 16:22:57.757621 4808 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4a30af7-342e-49c0-8e89-c38f11b7cc63-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.046944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g" event={"ID":"e4a30af7-342e-49c0-8e89-c38f11b7cc63","Type":"ContainerDied","Data":"ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863"}
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.046983 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec611864a405eeef1eea8b1792d33b647fe4a37506f5f9ad7454e52f00a3b863"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.047012 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.178793 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"]
Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179202 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179215 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179238 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179245 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server"
Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179265 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-content"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179271 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-content"
Feb 17 16:22:58 crc kubenswrapper[4808]: E0217 16:22:58.179287 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-utilities"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179294 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="extract-utilities"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179466 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a30af7-342e-49c0-8e89-c38f11b7cc63" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.179480 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e6a34f-a3c5-453d-a8e0-244c279aa68f" containerName="registry-server"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.180152 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183397 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183679 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183843 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.183952 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.190536 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"]
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.265872 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.266313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.266503 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.368189 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.368278 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.368328 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.374135 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.386334 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.387169 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-sjckt\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:58 crc kubenswrapper[4808]: I0217 16:22:58.536870 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"
Feb 17 16:22:59 crc kubenswrapper[4808]: I0217 16:22:59.132121 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt"]
Feb 17 16:23:00 crc kubenswrapper[4808]: I0217 16:23:00.074873 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerStarted","Data":"92e6ef387cf41dd71a851ea483493cf05b8666e2889e1132cbfb6ad483176127"}
Feb 17 16:23:00 crc kubenswrapper[4808]: I0217 16:23:00.075395 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerStarted","Data":"b7f31d0387d770241189aacd0771c827ab5a7b271e4e7dcc1efa78c199758ae8"}
Feb 17 16:23:00 crc kubenswrapper[4808]: I0217 16:23:00.099792 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" podStartSLOduration=1.5139489990000001 podStartE2EDuration="2.099762739s" podCreationTimestamp="2026-02-17 16:22:58 +0000 UTC" firstStartedPulling="2026-02-17 16:22:59.124341464 +0000 UTC m=+1742.640700547" lastFinishedPulling="2026-02-17 16:22:59.710155204 +0000 UTC m=+1743.226514287" observedRunningTime="2026-02-17 16:23:00.089954983 +0000 UTC m=+1743.606314106" watchObservedRunningTime="2026-02-17 16:23:00.099762739 +0000 UTC m=+1743.616121892"
Feb 17 16:23:03 crc kubenswrapper[4808]: I0217 16:23:03.150743 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:23:03 crc kubenswrapper[4808]: E0217 16:23:03.157660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:23:07 crc kubenswrapper[4808]: E0217 16:23:07.158425 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:23:11 crc kubenswrapper[4808]: E0217 16:23:11.149743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.155516 4808 scope.go:117] "RemoveContainer" containerID="8bfe96313fc0880ba2b05de73386c3a0141557df7597d80f4ca352d193fcea90"
Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.193842 4808 scope.go:117] "RemoveContainer" containerID="8ef043aeb841feb7820cafa9458135b261212780ed4c47c6422beb21b665b0f8"
Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.232863 4808 scope.go:117] "RemoveContainer" containerID="b2074f66b52d0ee5fc07e0dd48e5b9610e713f89e070fa2279a74046e30629e5"
Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.265947 4808 scope.go:117] "RemoveContainer" containerID="8a9460318021d21a8c095dc46b0f6d2b923e1d1fb20312230919800b64c327bf"
Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.303048 4808 scope.go:117] "RemoveContainer" containerID="d73ac62ad3bfcdefb51a665f43bfa062a8308099aae6c2d45cb612f3752adbbe"
Feb 17 16:23:15 crc kubenswrapper[4808]: I0217 16:23:15.340205 4808 scope.go:117] "RemoveContainer" containerID="14e92a83abc11738c2e58494b921f0dba3aa3b66f55a3affc10d2417c6785a90"
Feb 17 16:23:18 crc kubenswrapper[4808]: I0217 16:23:18.145875 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:23:18 crc kubenswrapper[4808]: E0217 16:23:18.146793 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:23:22 crc kubenswrapper[4808]: E0217 16:23:22.149656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:23:22 crc kubenswrapper[4808]: E0217 16:23:22.149656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:23:30 crc kubenswrapper[4808]: I0217 16:23:30.146927 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:23:30 crc kubenswrapper[4808]: E0217 16:23:30.149899 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:23:33 crc kubenswrapper[4808]: E0217 16:23:33.149313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:23:37 crc kubenswrapper[4808]: E0217 16:23:37.174701 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.063312 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6mgt5"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.079998 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6mgt5"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.097554 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.112083 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mp9g8"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.124332 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.138339 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mp9g8"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.174318 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56341195-0325-4b22-ba76-8f792fbbcdb6" path="/var/lib/kubelet/pods/56341195-0325-4b22-ba76-8f792fbbcdb6/volumes"
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.176774 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7419b027-2686-4ba4-9459-30a4362d34f0" path="/var/lib/kubelet/pods/7419b027-2686-4ba4-9459-30a4362d34f0/volumes"
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.179020 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1c2d-account-create-update-5rmst"]
Feb 17 16:23:41 crc kubenswrapper[4808]: I0217 16:23:41.179072 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1e92-account-create-update-s8tnj"]
Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.054122 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cw2fg"]
Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.067605 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cw2fg"]
Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.076671 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"]
Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.084943 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6fc9-account-create-update-hsl6c"]
Feb 17 16:23:42 crc kubenswrapper[4808]: I0217 16:23:42.145318 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:23:42 crc kubenswrapper[4808]: E0217 16:23:42.145720 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.167534 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58e700c8-ab25-47a2-a6cf-e85ffcb57e74" path="/var/lib/kubelet/pods/58e700c8-ab25-47a2-a6cf-e85ffcb57e74/volumes"
Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.169542 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850baae5-89be-441f-85e0-f2f0ec68bdc3" path="/var/lib/kubelet/pods/850baae5-89be-441f-85e0-f2f0ec68bdc3/volumes"
Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.171678 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850d66dd-e985-408b-93a0-8251cfd8dbc5" path="/var/lib/kubelet/pods/850d66dd-e985-408b-93a0-8251cfd8dbc5/volumes"
Feb 17 16:23:43 crc kubenswrapper[4808]: I0217 16:23:43.172895 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbacbd93-bbc0-4360-bc45-9782988bd3c0" path="/var/lib/kubelet/pods/dbacbd93-bbc0-4360-bc45-9782988bd3c0/volumes"
Feb 17 16:23:45 crc kubenswrapper[4808]: E0217 16:23:45.147729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:23:51 crc kubenswrapper[4808]: E0217 16:23:51.148398 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:23:56 crc kubenswrapper[4808]: I0217 16:23:56.146413 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:23:56 crc kubenswrapper[4808]: E0217 16:23:56.147602 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:24:00 crc kubenswrapper[4808]: E0217 16:24:00.147598 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:24:06 crc kubenswrapper[4808]: E0217 16:24:06.148941 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:24:07 crc kubenswrapper[4808]: I0217 16:24:07.050942 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f2jqv"]
Feb 17 16:24:07 crc kubenswrapper[4808]: I0217 16:24:07.066288 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f2jqv"]
Feb 17 16:24:07 crc kubenswrapper[4808]: I0217 16:24:07.163394 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7377369f-b540-4b85-be05-4200c9695a41" path="/var/lib/kubelet/pods/7377369f-b540-4b85-be05-4200c9695a41/volumes"
Feb 17 16:24:09 crc kubenswrapper[4808]: I0217 16:24:09.146197 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d"
Feb 17 16:24:09 crc kubenswrapper[4808]: E0217 16:24:09.146840 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.065755 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.093128 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-ktddg"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.105964 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jqrq2"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.115279 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.125222 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.137219 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jmq6n"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.167590 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.167833 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.177892 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59d8-account-create-update-5vsvx"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.196078 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jqrq2"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.208122 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ktddg"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.217117 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-a9c6-account-create-update-48vv8"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.226733 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8c80-account-create-update-rk4jj"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.240976 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-78cc-account-create-update-k7vgl"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.249204 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jmq6n"]
Feb 17 16:24:11 crc kubenswrapper[4808]: I0217 16:24:11.256899 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-r5lfk"]
Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.282910 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.283512 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.283830 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:24:12 crc kubenswrapper[4808]: E0217 16:24:12.285227 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.163227 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02478fdd-380d-42f9-b105-c3ae86d224a8" path="/var/lib/kubelet/pods/02478fdd-380d-42f9-b105-c3ae86d224a8/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.164827 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2495c4d6-8174-4b4d-9114-968620fbba31" path="/var/lib/kubelet/pods/2495c4d6-8174-4b4d-9114-968620fbba31/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.165995 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ccecd7d-0e59-4336-a6ec-a595adbb727e" path="/var/lib/kubelet/pods/3ccecd7d-0e59-4336-a6ec-a595adbb727e/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.167095 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e328d4-94e9-42bc-ae1c-b07b01d80072" path="/var/lib/kubelet/pods/72e328d4-94e9-42bc-ae1c-b07b01d80072/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.169426 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02cbd83-d077-4812-b852-7fe9a0182b71" path="/var/lib/kubelet/pods/c02cbd83-d077-4812-b852-7fe9a0182b71/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.171026 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e183e901-16a0-43cf-9ce5-ef36da8686d1" path="/var/lib/kubelet/pods/e183e901-16a0-43cf-9ce5-ef36da8686d1/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.172713 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5180ea6-12c0-4463-8fe5-c35ab2a15b44" path="/var/lib/kubelet/pods/e5180ea6-12c0-4463-8fe5-c35ab2a15b44/volumes" Feb 17 16:24:13 crc kubenswrapper[4808]: I0217 16:24:13.174953 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="ff670244-5344-4409-9823-6bfcf9ed274d" path="/var/lib/kubelet/pods/ff670244-5344-4409-9823-6bfcf9ed274d/volumes" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.446668 4808 scope.go:117] "RemoveContainer" containerID="2e2ee0ccc758be665530168176318d177d82ba65213912cccc942306aee57326" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.502222 4808 scope.go:117] "RemoveContainer" containerID="468b053d64c80baec6de3b54c4b2f477a89ae15f7b2f83e72b93e7a2a09b7e47" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.585859 4808 scope.go:117] "RemoveContainer" containerID="77cbcade43f0ae77b54c73845bcb62b81d16918f6513db83061d64f348ec9b2b" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.620142 4808 scope.go:117] "RemoveContainer" containerID="20f7389fa9f51fba5453c2a234db420d7d9f90654863c47b866a9ae0d75fd9b5" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.675220 4808 scope.go:117] "RemoveContainer" containerID="f07d48d83b8d167312f75dfe2e3617926d4c7c6a17b68b60f025f9a0615ec6aa" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.718271 4808 scope.go:117] "RemoveContainer" containerID="8bbf45c20da63316a7d1a31fef41a55e4272d4200c5d0a86c7aa340258751589" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.769674 4808 scope.go:117] "RemoveContainer" containerID="b727a664b9c0061ba9f01801dd0228679fbc0026b1e712729a3b0f80c6eddfb3" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.795759 4808 scope.go:117] "RemoveContainer" containerID="2318a25c8a4fd490438531d7eb31b39589b2387c36e3e5db64b5abeb8c178d66" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.821847 4808 scope.go:117] "RemoveContainer" containerID="d6c0e57ec0c9fe5da75d2c778f8867455af3d9bb73146a28181bca20e679417d" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.842615 4808 scope.go:117] "RemoveContainer" containerID="b9a6e75c4872c463e0bee7ea278256a76575233d65a1cb8980723a4259e57365" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.865896 4808 scope.go:117] 
"RemoveContainer" containerID="c6b61ad973a4d676df7b94d7816cb334b0acc481ec5fdce3038641a24a062cf0" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.888039 4808 scope.go:117] "RemoveContainer" containerID="92a52a548321e7e91228a92677db66adc649f3fd4be4a1f0b2dcb81c8ce95063" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.912883 4808 scope.go:117] "RemoveContainer" containerID="313ac15ae60a5d599f6768b0198df4cac62283c718fe3fa77e1a4a039f74c3b9" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.935874 4808 scope.go:117] "RemoveContainer" containerID="56b80ac7ee378fc8d9b7164abf8b6f6b4c7155149d6206a5a9c6aa08286e5594" Feb 17 16:24:15 crc kubenswrapper[4808]: I0217 16:24:15.955152 4808 scope.go:117] "RemoveContainer" containerID="ebb5009c36b8fd7590317bf3c492f0defedfa61fc35e3d839e79e88a3e507747" Feb 17 16:24:18 crc kubenswrapper[4808]: I0217 16:24:18.039110 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:24:18 crc kubenswrapper[4808]: I0217 16:24:18.053450 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kzjns"] Feb 17 16:24:19 crc kubenswrapper[4808]: E0217 16:24:19.149909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:19 crc kubenswrapper[4808]: I0217 16:24:19.166378 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c68bd6-6280-4a89-be87-4d65f06a5a4d" path="/var/lib/kubelet/pods/41c68bd6-6280-4a89-be87-4d65f06a5a4d/volumes" Feb 17 16:24:23 crc kubenswrapper[4808]: I0217 16:24:23.146272 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:23 crc 
kubenswrapper[4808]: E0217 16:24:23.148754 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:27 crc kubenswrapper[4808]: E0217 16:24:27.161773 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.283665 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.284370 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.284565 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:24:34 crc kubenswrapper[4808]: E0217 16:24:34.286011 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:37 crc kubenswrapper[4808]: I0217 16:24:37.158142 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:37 crc kubenswrapper[4808]: E0217 16:24:37.159152 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:40 crc kubenswrapper[4808]: E0217 16:24:40.150287 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.065142 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.082657 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.092421 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-4mdzt"] Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.104122 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jskwv"] Feb 17 16:24:45 crc kubenswrapper[4808]: E0217 16:24:45.149296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.169118 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436b0400-6c82-450b-9505-61bf124b5db5" path="/var/lib/kubelet/pods/436b0400-6c82-450b-9505-61bf124b5db5/volumes" Feb 17 16:24:45 crc kubenswrapper[4808]: I0217 16:24:45.170324 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4002815-8dd4-4668-bea7-0d54bdaa4dd6" path="/var/lib/kubelet/pods/e4002815-8dd4-4668-bea7-0d54bdaa4dd6/volumes" Feb 17 16:24:50 crc kubenswrapper[4808]: I0217 16:24:50.146063 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:24:50 crc kubenswrapper[4808]: E0217 16:24:50.146678 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:24:51 crc kubenswrapper[4808]: E0217 16:24:51.147809 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:24:59 crc kubenswrapper[4808]: E0217 16:24:59.149212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:04 crc kubenswrapper[4808]: E0217 16:25:04.148814 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:05 crc kubenswrapper[4808]: I0217 16:25:05.146768 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:25:05 crc kubenswrapper[4808]: I0217 16:25:05.675215 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f"} Feb 17 16:25:06 crc kubenswrapper[4808]: I0217 16:25:06.029426 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:25:06 crc kubenswrapper[4808]: I0217 16:25:06.038188 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-67f4b"] Feb 17 16:25:07 crc kubenswrapper[4808]: I0217 16:25:07.177854 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb977bed-804c-4e4c-8d35-5562015024f3" path="/var/lib/kubelet/pods/bb977bed-804c-4e4c-8d35-5562015024f3/volumes" Feb 17 16:25:08 crc kubenswrapper[4808]: I0217 16:25:08.056576 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:25:08 crc kubenswrapper[4808]: I0217 16:25:08.068057 4808 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-d52vg"] Feb 17 16:25:09 crc kubenswrapper[4808]: I0217 16:25:09.168891 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7820c3c-fe38-46dd-906a-498a579d0805" path="/var/lib/kubelet/pods/b7820c3c-fe38-46dd-906a-498a579d0805/volumes" Feb 17 16:25:13 crc kubenswrapper[4808]: E0217 16:25:13.149729 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:14 crc kubenswrapper[4808]: I0217 16:25:14.038365 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:25:14 crc kubenswrapper[4808]: I0217 16:25:14.049939 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rwld8"] Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.039165 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.071757 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-jcqjf"] Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.166631 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bf4d932-664a-46c6-bec5-f2b70950c824" path="/var/lib/kubelet/pods/5bf4d932-664a-46c6-bec5-f2b70950c824/volumes" Feb 17 16:25:15 crc kubenswrapper[4808]: I0217 16:25:15.167355 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0cc3be3-7aa7-4384-97ed-1ec7bf75f026" path="/var/lib/kubelet/pods/d0cc3be3-7aa7-4384-97ed-1ec7bf75f026/volumes" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.286633 4808 scope.go:117] "RemoveContainer" 
containerID="be39fd3404d415b22eff1029ee90e816412441ea7651c949f01bcda15108e232" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.337412 4808 scope.go:117] "RemoveContainer" containerID="f8847c4c332a78fa4f9cfb197b1e182c16bad161468b9956b43f0c638512254c" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.416051 4808 scope.go:117] "RemoveContainer" containerID="d13306e7f7b98912b9cc3cb00da949b55a527efdf00a13d4c28a802941f6067a" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.461815 4808 scope.go:117] "RemoveContainer" containerID="f426da7c0095388c504bdd496cb29b45871594e3a52a02106d296d950a35b8b0" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.533519 4808 scope.go:117] "RemoveContainer" containerID="605854da0374a1e089d7a0c7ad0840ab1318edc5017bc1e2125f207c2fb40b06" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.576273 4808 scope.go:117] "RemoveContainer" containerID="8d303380763eeeb183dbe5ad17a24b48fb7b4e5af84df78d3904d5c4c2cf91f7" Feb 17 16:25:16 crc kubenswrapper[4808]: I0217 16:25:16.613996 4808 scope.go:117] "RemoveContainer" containerID="1cff9cf3eadd10df7be967e33cf8e5d78b57505ed6a912803f00cfd78dd0e31c" Feb 17 16:25:18 crc kubenswrapper[4808]: E0217 16:25:18.147560 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:24 crc kubenswrapper[4808]: E0217 16:25:24.149162 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:28 crc 
kubenswrapper[4808]: I0217 16:25:28.048588 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:25:28 crc kubenswrapper[4808]: I0217 16:25:28.065511 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-cftjl"] Feb 17 16:25:29 crc kubenswrapper[4808]: I0217 16:25:29.173772 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf7344d6-b8f4-4234-bb75-f4d7702b040b" path="/var/lib/kubelet/pods/cf7344d6-b8f4-4234-bb75-f4d7702b040b/volumes" Feb 17 16:25:30 crc kubenswrapper[4808]: E0217 16:25:30.148857 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:36 crc kubenswrapper[4808]: E0217 16:25:36.149100 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:43 crc kubenswrapper[4808]: E0217 16:25:43.154339 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:51 crc kubenswrapper[4808]: E0217 16:25:51.148946 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:25:54 crc kubenswrapper[4808]: E0217 16:25:54.149427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.197624 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.201473 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.217187 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.394887 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.394989 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc 
kubenswrapper[4808]: I0217 16:25:55.395026 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.496901 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497205 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497355 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.497696 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc 
kubenswrapper[4808]: I0217 16:25:55.497844 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.527904 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"certified-operators-tlq8w\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:55 crc kubenswrapper[4808]: I0217 16:25:55.585938 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.048566 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.064853 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.073170 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.085704 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.093448 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.101834 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7e6f-account-create-update-zcm7d"] Feb 17 
16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.111111 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-490b-account-create-update-7wjkg"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.123302 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0369-account-create-update-hd6gb"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.133018 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-drbdx"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.160079 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bmg4x"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.183252 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:25:56 crc kubenswrapper[4808]: I0217 16:25:56.308236 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerStarted","Data":"3afeb434bab0fb0da9b13eaddc7fb873f72a9bd6ff9080844b251c54195e62ad"} Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.032671 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.045230 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tmj75"] Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.165921 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785bc852-9af8-4d44-9c07-a7b501efb72c" path="/var/lib/kubelet/pods/785bc852-9af8-4d44-9c07-a7b501efb72c/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.166492 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84bc7003-1a29-41b6-af75-956706dd0efe" path="/var/lib/kubelet/pods/84bc7003-1a29-41b6-af75-956706dd0efe/volumes" Feb 17 
16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.167033 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb98158-8a64-4a24-9d8a-5c7308881c79" path="/var/lib/kubelet/pods/adb98158-8a64-4a24-9d8a-5c7308881c79/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.168668 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6543f3f-c70d-4258-b1f3-b74458b60153" path="/var/lib/kubelet/pods/b6543f3f-c70d-4258-b1f3-b74458b60153/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.169748 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bad0fdf2-2880-4568-87b0-6319f864c348" path="/var/lib/kubelet/pods/bad0fdf2-2880-4568-87b0-6319f864c348/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.170259 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6cd1abe-7b23-494f-b22f-b355f5937f82" path="/var/lib/kubelet/pods/c6cd1abe-7b23-494f-b22f-b355f5937f82/volumes" Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.317147 4808 generic.go:334] "Generic (PLEG): container finished" podID="071adac8-52ce-4703-a685-252d450e9c18" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" exitCode=0 Feb 17 16:25:57 crc kubenswrapper[4808]: I0217 16:25:57.317216 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0"} Feb 17 16:25:58 crc kubenswrapper[4808]: I0217 16:25:58.326503 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerStarted","Data":"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2"} Feb 17 16:26:02 crc kubenswrapper[4808]: I0217 16:26:02.383083 4808 generic.go:334] "Generic (PLEG): 
container finished" podID="071adac8-52ce-4703-a685-252d450e9c18" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" exitCode=0 Feb 17 16:26:02 crc kubenswrapper[4808]: I0217 16:26:02.383157 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2"} Feb 17 16:26:03 crc kubenswrapper[4808]: E0217 16:26:03.146860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:03 crc kubenswrapper[4808]: I0217 16:26:03.395094 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerStarted","Data":"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d"} Feb 17 16:26:03 crc kubenswrapper[4808]: I0217 16:26:03.429951 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlq8w" podStartSLOduration=2.983338063 podStartE2EDuration="8.429921445s" podCreationTimestamp="2026-02-17 16:25:55 +0000 UTC" firstStartedPulling="2026-02-17 16:25:57.319717206 +0000 UTC m=+1920.836076279" lastFinishedPulling="2026-02-17 16:26:02.766300578 +0000 UTC m=+1926.282659661" observedRunningTime="2026-02-17 16:26:03.415355585 +0000 UTC m=+1926.931714698" watchObservedRunningTime="2026-02-17 16:26:03.429921445 +0000 UTC m=+1926.946280548" Feb 17 16:26:05 crc kubenswrapper[4808]: I0217 16:26:05.587057 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:05 crc kubenswrapper[4808]: I0217 16:26:05.588650 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:05 crc kubenswrapper[4808]: I0217 16:26:05.648523 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:08 crc kubenswrapper[4808]: E0217 16:26:08.149010 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:15 crc kubenswrapper[4808]: E0217 16:26:15.150171 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:15 crc kubenswrapper[4808]: I0217 16:26:15.672837 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:15 crc kubenswrapper[4808]: I0217 16:26:15.745185 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.536853 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlq8w" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" containerID="cri-o://c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" gracePeriod=2 Feb 17 
16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.807968 4808 scope.go:117] "RemoveContainer" containerID="202121dae9bdf398a0c42e540c49f3bde76321b020f7cab3e7250c352d974480" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.877491 4808 scope.go:117] "RemoveContainer" containerID="8a03cfda6ba1482551fb43a88bb0d456e3e357369b1e584649fa69312e5fe7ab" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.915693 4808 scope.go:117] "RemoveContainer" containerID="51791c7cf2f261447e50c08d9d3c4f313629f6102c4610a772dc3de95d2aa336" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.951708 4808 scope.go:117] "RemoveContainer" containerID="24b6cca39f7f0539540e703e695312278dead1c9fbed89b92d1978c2b31592d9" Feb 17 16:26:16 crc kubenswrapper[4808]: I0217 16:26:16.997448 4808 scope.go:117] "RemoveContainer" containerID="0c5f393313c4812ace12e3dfcc1699bc58edf0ad3bd0769e445698189b780158" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.061164 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.080194 4808 scope.go:117] "RemoveContainer" containerID="75d3a237cde61df2195413fb2a62d4c02235666e74a55328045b62f08820fc28" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.123197 4808 scope.go:117] "RemoveContainer" containerID="4239c263afa33d8fe9b5e50780a3b457b698315d00933f6d44bd070b105665ca" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.239837 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") pod \"071adac8-52ce-4703-a685-252d450e9c18\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.239982 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") pod \"071adac8-52ce-4703-a685-252d450e9c18\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.240137 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") pod \"071adac8-52ce-4703-a685-252d450e9c18\" (UID: \"071adac8-52ce-4703-a685-252d450e9c18\") " Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.241063 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities" (OuterVolumeSpecName: "utilities") pod "071adac8-52ce-4703-a685-252d450e9c18" (UID: "071adac8-52ce-4703-a685-252d450e9c18"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.245131 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv" (OuterVolumeSpecName: "kube-api-access-8rbsv") pod "071adac8-52ce-4703-a685-252d450e9c18" (UID: "071adac8-52ce-4703-a685-252d450e9c18"). InnerVolumeSpecName "kube-api-access-8rbsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.289977 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "071adac8-52ce-4703-a685-252d450e9c18" (UID: "071adac8-52ce-4703-a685-252d450e9c18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.342497 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.342530 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rbsv\" (UniqueName: \"kubernetes.io/projected/071adac8-52ce-4703-a685-252d450e9c18-kube-api-access-8rbsv\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.342544 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071adac8-52ce-4703-a685-252d450e9c18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.553318 4808 generic.go:334] "Generic (PLEG): container finished" podID="071adac8-52ce-4703-a685-252d450e9c18" 
containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" exitCode=0 Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.553491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d"} Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.555124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlq8w" event={"ID":"071adac8-52ce-4703-a685-252d450e9c18","Type":"ContainerDied","Data":"3afeb434bab0fb0da9b13eaddc7fb873f72a9bd6ff9080844b251c54195e62ad"} Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.553664 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlq8w" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.555164 4808 scope.go:117] "RemoveContainer" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.589261 4808 scope.go:117] "RemoveContainer" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.613629 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.617644 4808 scope.go:117] "RemoveContainer" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.624731 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlq8w"] Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.675383 4808 scope.go:117] "RemoveContainer" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" Feb 17 
16:26:17 crc kubenswrapper[4808]: E0217 16:26:17.690766 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d\": container with ID starting with c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d not found: ID does not exist" containerID="c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.691076 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d"} err="failed to get container status \"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d\": rpc error: code = NotFound desc = could not find container \"c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d\": container with ID starting with c191af7df43171393a8f2dcb17ff0940237db92a45de7eb53f8e9d7b06e7e72d not found: ID does not exist" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.691232 4808 scope.go:117] "RemoveContainer" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" Feb 17 16:26:17 crc kubenswrapper[4808]: E0217 16:26:17.697758 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2\": container with ID starting with 249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2 not found: ID does not exist" containerID="249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.698059 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2"} err="failed to get container status 
\"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2\": rpc error: code = NotFound desc = could not find container \"249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2\": container with ID starting with 249d45e5ae84178337a0d9ce7ba335223b88500ffe32a4b928144256f92f26e2 not found: ID does not exist" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.698214 4808 scope.go:117] "RemoveContainer" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" Feb 17 16:26:17 crc kubenswrapper[4808]: E0217 16:26:17.700932 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0\": container with ID starting with de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0 not found: ID does not exist" containerID="de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0" Feb 17 16:26:17 crc kubenswrapper[4808]: I0217 16:26:17.701080 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0"} err="failed to get container status \"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0\": rpc error: code = NotFound desc = could not find container \"de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0\": container with ID starting with de091f3b420c5b774023dbccb9d1b587bc62d5421a964a3454077ef3e32acdc0 not found: ID does not exist" Feb 17 16:26:19 crc kubenswrapper[4808]: I0217 16:26:19.161673 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071adac8-52ce-4703-a685-252d450e9c18" path="/var/lib/kubelet/pods/071adac8-52ce-4703-a685-252d450e9c18/volumes" Feb 17 16:26:22 crc kubenswrapper[4808]: E0217 16:26:22.148103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:28 crc kubenswrapper[4808]: I0217 16:26:28.064148 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:26:28 crc kubenswrapper[4808]: I0217 16:26:28.079784 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrx8j"] Feb 17 16:26:29 crc kubenswrapper[4808]: I0217 16:26:29.160491 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a276997e-b8ab-4b5a-ac5f-c21a8114d673" path="/var/lib/kubelet/pods/a276997e-b8ab-4b5a-ac5f-c21a8114d673/volumes" Feb 17 16:26:30 crc kubenswrapper[4808]: E0217 16:26:30.147517 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:33 crc kubenswrapper[4808]: E0217 16:26:33.150631 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:44 crc kubenswrapper[4808]: E0217 16:26:44.150035 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:26:44 crc kubenswrapper[4808]: E0217 16:26:44.150097 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:55 crc kubenswrapper[4808]: E0217 16:26:55.149684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:26:58 crc kubenswrapper[4808]: E0217 16:26:58.148323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.051454 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.066457 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.081413 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-46chh"] Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.089446 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-lhrsb"] Feb 17 
16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.159239 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3864d41e-915e-4b73-908e-c575d38863e9" path="/var/lib/kubelet/pods/3864d41e-915e-4b73-908e-c575d38863e9/volumes" Feb 17 16:27:03 crc kubenswrapper[4808]: I0217 16:27:03.160359 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d64831b-aec0-42cd-96ec-831ec911d921" path="/var/lib/kubelet/pods/8d64831b-aec0-42cd-96ec-831ec911d921/volumes" Feb 17 16:27:09 crc kubenswrapper[4808]: E0217 16:27:09.148901 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:13 crc kubenswrapper[4808]: E0217 16:27:13.148000 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:17 crc kubenswrapper[4808]: I0217 16:27:17.297721 4808 scope.go:117] "RemoveContainer" containerID="c7ce5a6ab108ae38e41b41038e16d03130e5c8bb91a8cb5bfd28423f0687dfdc" Feb 17 16:27:17 crc kubenswrapper[4808]: I0217 16:27:17.344687 4808 scope.go:117] "RemoveContainer" containerID="03dd27d0072c98b182eebc081f82c18296cd4cef8a9626830d097fc0caa3a09f" Feb 17 16:27:17 crc kubenswrapper[4808]: I0217 16:27:17.414894 4808 scope.go:117] "RemoveContainer" containerID="531034a194c4af62f0c8e11015f026a45e10d027a70d8384a365f5385731c096" Feb 17 16:27:20 crc kubenswrapper[4808]: E0217 16:27:20.150141 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:21 crc kubenswrapper[4808]: I0217 16:27:21.592778 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:27:21 crc kubenswrapper[4808]: I0217 16:27:21.592860 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:27:25 crc kubenswrapper[4808]: E0217 16:27:25.148179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:32 crc kubenswrapper[4808]: E0217 16:27:32.148973 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:38 crc kubenswrapper[4808]: E0217 16:27:38.149670 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:41 crc kubenswrapper[4808]: I0217 16:27:41.779072 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 17 16:27:44 crc kubenswrapper[4808]: E0217 16:27:44.148731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:27:51 crc kubenswrapper[4808]: I0217 16:27:51.592049 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:27:51 crc kubenswrapper[4808]: I0217 16:27:51.592718 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:27:52 crc kubenswrapper[4808]: I0217 16:27:52.052052 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:27:52 crc kubenswrapper[4808]: I0217 16:27:52.060923 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-lf98l"] Feb 17 16:27:52 crc kubenswrapper[4808]: E0217 16:27:52.148564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:27:53 crc kubenswrapper[4808]: I0217 16:27:53.159321 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a26947f-ccdc-4726-98dc-a0c08a2a198b" path="/var/lib/kubelet/pods/9a26947f-ccdc-4726-98dc-a0c08a2a198b/volumes" Feb 17 16:27:56 crc kubenswrapper[4808]: E0217 16:27:56.149402 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:06 crc kubenswrapper[4808]: E0217 16:28:06.147563 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:09 crc kubenswrapper[4808]: E0217 16:28:09.151412 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 
16:28:13.879000 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:13 crc kubenswrapper[4808]: E0217 16:28:13.880749 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-utilities" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.880782 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-utilities" Feb 17 16:28:13 crc kubenswrapper[4808]: E0217 16:28:13.880848 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.880866 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" Feb 17 16:28:13 crc kubenswrapper[4808]: E0217 16:28:13.880915 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-content" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.880938 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="extract-content" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.881484 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="071adac8-52ce-4703-a685-252d450e9c18" containerName="registry-server" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.885336 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.910635 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.938153 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.938234 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:13 crc kubenswrapper[4808]: I0217 16:28:13.938334 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040108 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040464 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040559 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040787 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.040983 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.062235 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"redhat-operators-hq4vv\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.228817 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.685223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:14 crc kubenswrapper[4808]: I0217 16:28:14.868269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerStarted","Data":"59f85d534f1c5d5a0ca9234081d8cdc8974975ca244768bed00c00b344466112"} Feb 17 16:28:15 crc kubenswrapper[4808]: I0217 16:28:15.879731 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" exitCode=0 Feb 17 16:28:15 crc kubenswrapper[4808]: I0217 16:28:15.879904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b"} Feb 17 16:28:15 crc kubenswrapper[4808]: I0217 16:28:15.882004 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:28:16 crc kubenswrapper[4808]: I0217 16:28:16.896567 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerStarted","Data":"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844"} Feb 17 16:28:17 crc kubenswrapper[4808]: I0217 16:28:17.561318 4808 scope.go:117] "RemoveContainer" containerID="af528ab271e814b2015501ad54dc67165447a3cd6d539f4779d4b1f395b9ad79" Feb 17 16:28:18 crc kubenswrapper[4808]: E0217 16:28:18.148442 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:20 crc kubenswrapper[4808]: E0217 16:28:20.148570 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.592997 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.593091 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.593161 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.594482 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.594658 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f" gracePeriod=600 Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.955889 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f" exitCode=0 Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.955988 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f"} Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.956273 4808 scope.go:117] "RemoveContainer" containerID="3d547770092f773b5c7f62497d5451390c51dc1c958b49576b85d692e046de5d" Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.959593 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" exitCode=0 Feb 17 16:28:21 crc kubenswrapper[4808]: I0217 16:28:21.959637 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844"} Feb 17 16:28:22 crc kubenswrapper[4808]: I0217 16:28:22.975223 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" 
event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerStarted","Data":"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb"} Feb 17 16:28:22 crc kubenswrapper[4808]: I0217 16:28:22.978061 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"} Feb 17 16:28:23 crc kubenswrapper[4808]: I0217 16:28:23.018144 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hq4vv" podStartSLOduration=3.467866383 podStartE2EDuration="10.018124021s" podCreationTimestamp="2026-02-17 16:28:13 +0000 UTC" firstStartedPulling="2026-02-17 16:28:15.881532706 +0000 UTC m=+2059.397891799" lastFinishedPulling="2026-02-17 16:28:22.431790364 +0000 UTC m=+2065.948149437" observedRunningTime="2026-02-17 16:28:22.999109314 +0000 UTC m=+2066.515468397" watchObservedRunningTime="2026-02-17 16:28:23.018124021 +0000 UTC m=+2066.534483094" Feb 17 16:28:24 crc kubenswrapper[4808]: I0217 16:28:24.229886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:24 crc kubenswrapper[4808]: I0217 16:28:24.230390 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:25 crc kubenswrapper[4808]: I0217 16:28:25.282379 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq4vv" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:25 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:25 crc kubenswrapper[4808]: > Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.027808 4808 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.030801 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.042551 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.136102 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.136161 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.136225 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.238508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod 
\"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.239943 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.239996 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.240917 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.241491 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"community-operators-4zchg\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.268752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"community-operators-4zchg\" (UID: 
\"12171d1b-4dea-4358-89cd-ba25b219f753\") " pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.363393 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:27 crc kubenswrapper[4808]: I0217 16:28:27.953110 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:28 crc kubenswrapper[4808]: I0217 16:28:28.029428 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerStarted","Data":"68f649476e38cbc82b4ba982f39c632fb19bbdf3c243d2c8025176af812aea53"} Feb 17 16:28:29 crc kubenswrapper[4808]: I0217 16:28:29.044554 4808 generic.go:334] "Generic (PLEG): container finished" podID="12171d1b-4dea-4358-89cd-ba25b219f753" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" exitCode=0 Feb 17 16:28:29 crc kubenswrapper[4808]: I0217 16:28:29.044634 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a"} Feb 17 16:28:30 crc kubenswrapper[4808]: I0217 16:28:30.056269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerStarted","Data":"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c"} Feb 17 16:28:30 crc kubenswrapper[4808]: E0217 16:28:30.147634 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:31 crc kubenswrapper[4808]: E0217 16:28:31.148332 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:32 crc kubenswrapper[4808]: I0217 16:28:32.079160 4808 generic.go:334] "Generic (PLEG): container finished" podID="12171d1b-4dea-4358-89cd-ba25b219f753" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" exitCode=0 Feb 17 16:28:32 crc kubenswrapper[4808]: I0217 16:28:32.079270 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c"} Feb 17 16:28:33 crc kubenswrapper[4808]: I0217 16:28:33.092069 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerStarted","Data":"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3"} Feb 17 16:28:33 crc kubenswrapper[4808]: I0217 16:28:33.118427 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4zchg" podStartSLOduration=2.662289438 podStartE2EDuration="6.118410233s" podCreationTimestamp="2026-02-17 16:28:27 +0000 UTC" firstStartedPulling="2026-02-17 16:28:29.04767989 +0000 UTC m=+2072.564038963" lastFinishedPulling="2026-02-17 16:28:32.503800685 +0000 UTC m=+2076.020159758" observedRunningTime="2026-02-17 
16:28:33.113891573 +0000 UTC m=+2076.630250686" watchObservedRunningTime="2026-02-17 16:28:33.118410233 +0000 UTC m=+2076.634769316" Feb 17 16:28:35 crc kubenswrapper[4808]: I0217 16:28:35.296506 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hq4vv" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" probeResult="failure" output=< Feb 17 16:28:35 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:28:35 crc kubenswrapper[4808]: > Feb 17 16:28:37 crc kubenswrapper[4808]: I0217 16:28:37.363540 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:37 crc kubenswrapper[4808]: I0217 16:28:37.363838 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:37 crc kubenswrapper[4808]: I0217 16:28:37.427116 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:38 crc kubenswrapper[4808]: I0217 16:28:38.202331 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:38 crc kubenswrapper[4808]: I0217 16:28:38.270955 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.173342 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4zchg" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" containerID="cri-o://63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" gracePeriod=2 Feb 17 16:28:40 crc kubenswrapper[4808]: E0217 16:28:40.421783 4808 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12171d1b_4dea_4358_89cd_ba25b219f753.slice/crio-63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12171d1b_4dea_4358_89cd_ba25b219f753.slice/crio-conmon-63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3.scope\": RecentStats: unable to find data in memory cache]" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.788919 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.881490 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") pod \"12171d1b-4dea-4358-89cd-ba25b219f753\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.881797 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") pod \"12171d1b-4dea-4358-89cd-ba25b219f753\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.881851 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") pod \"12171d1b-4dea-4358-89cd-ba25b219f753\" (UID: \"12171d1b-4dea-4358-89cd-ba25b219f753\") " Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.882728 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities" (OuterVolumeSpecName: "utilities") pod "12171d1b-4dea-4358-89cd-ba25b219f753" (UID: "12171d1b-4dea-4358-89cd-ba25b219f753"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.887594 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd" (OuterVolumeSpecName: "kube-api-access-k69hd") pod "12171d1b-4dea-4358-89cd-ba25b219f753" (UID: "12171d1b-4dea-4358-89cd-ba25b219f753"). InnerVolumeSpecName "kube-api-access-k69hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.934076 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12171d1b-4dea-4358-89cd-ba25b219f753" (UID: "12171d1b-4dea-4358-89cd-ba25b219f753"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.984911 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.984943 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k69hd\" (UniqueName: \"kubernetes.io/projected/12171d1b-4dea-4358-89cd-ba25b219f753-kube-api-access-k69hd\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:40 crc kubenswrapper[4808]: I0217 16:28:40.984953 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12171d1b-4dea-4358-89cd-ba25b219f753-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.190347 4808 generic.go:334] "Generic (PLEG): container finished" podID="12171d1b-4dea-4358-89cd-ba25b219f753" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" exitCode=0 Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.190454 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4zchg" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.190489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3"} Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.191660 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4zchg" event={"ID":"12171d1b-4dea-4358-89cd-ba25b219f753","Type":"ContainerDied","Data":"68f649476e38cbc82b4ba982f39c632fb19bbdf3c243d2c8025176af812aea53"} Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.191701 4808 scope.go:117] "RemoveContainer" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.227852 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.231768 4808 scope.go:117] "RemoveContainer" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.237748 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4zchg"] Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.257720 4808 scope.go:117] "RemoveContainer" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.332655 4808 scope.go:117] "RemoveContainer" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" Feb 17 16:28:41 crc kubenswrapper[4808]: E0217 16:28:41.333114 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3\": container with ID starting with 63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3 not found: ID does not exist" containerID="63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.333159 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3"} err="failed to get container status \"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3\": rpc error: code = NotFound desc = could not find container \"63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3\": container with ID starting with 63b0f13f2686512e6cb3851b56a4c2d66348cef0074cd1e2922ae2c51d2158d3 not found: ID does not exist" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.333193 4808 scope.go:117] "RemoveContainer" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" Feb 17 16:28:41 crc kubenswrapper[4808]: E0217 16:28:41.333860 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c\": container with ID starting with 01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c not found: ID does not exist" containerID="01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.333995 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c"} err="failed to get container status \"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c\": rpc error: code = NotFound desc = could not find container \"01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c\": container with ID 
starting with 01886f3aa66694baa8290698092e6055a9b8e9e08c35606c247630e462c5fc6c not found: ID does not exist" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.334102 4808 scope.go:117] "RemoveContainer" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" Feb 17 16:28:41 crc kubenswrapper[4808]: E0217 16:28:41.334718 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a\": container with ID starting with eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a not found: ID does not exist" containerID="eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a" Feb 17 16:28:41 crc kubenswrapper[4808]: I0217 16:28:41.334811 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a"} err="failed to get container status \"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a\": rpc error: code = NotFound desc = could not find container \"eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a\": container with ID starting with eaab67ade3e6a8ead085c7389c35450cef55e0a08a5aea1cae472285361aeb8a not found: ID does not exist" Feb 17 16:28:43 crc kubenswrapper[4808]: I0217 16:28:43.159201 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" path="/var/lib/kubelet/pods/12171d1b-4dea-4358-89cd-ba25b219f753/volumes" Feb 17 16:28:44 crc kubenswrapper[4808]: I0217 16:28:44.294420 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:44 crc kubenswrapper[4808]: I0217 16:28:44.353929 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:44 crc 
kubenswrapper[4808]: I0217 16:28:44.612808 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:45 crc kubenswrapper[4808]: E0217 16:28:45.151683 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:45 crc kubenswrapper[4808]: E0217 16:28:45.151702 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.237494 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hq4vv" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" containerID="cri-o://d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" gracePeriod=2 Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.786643 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.897202 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") pod \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.897378 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") pod \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.897471 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") pod \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\" (UID: \"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29\") " Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.898688 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities" (OuterVolumeSpecName: "utilities") pod "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" (UID: "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.915874 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n" (OuterVolumeSpecName: "kube-api-access-fpv9n") pod "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" (UID: "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29"). InnerVolumeSpecName "kube-api-access-fpv9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.999436 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpv9n\" (UniqueName: \"kubernetes.io/projected/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-kube-api-access-fpv9n\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:46 crc kubenswrapper[4808]: I0217 16:28:46.999465 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.016992 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" (UID: "9c5ff0a3-7a28-4be0-bbea-b9058f87ec29"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.101638 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250282 4808 generic.go:334] "Generic (PLEG): container finished" podID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" exitCode=0 Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250331 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb"} Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250363 4808 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hq4vv" event={"ID":"9c5ff0a3-7a28-4be0-bbea-b9058f87ec29","Type":"ContainerDied","Data":"59f85d534f1c5d5a0ca9234081d8cdc8974975ca244768bed00c00b344466112"} Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250371 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hq4vv" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.250385 4808 scope.go:117] "RemoveContainer" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.288941 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.294271 4808 scope.go:117] "RemoveContainer" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.300504 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hq4vv"] Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.327823 4808 scope.go:117] "RemoveContainer" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.383105 4808 scope.go:117] "RemoveContainer" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" Feb 17 16:28:47 crc kubenswrapper[4808]: E0217 16:28:47.384221 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb\": container with ID starting with d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb not found: ID does not exist" containerID="d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384290 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb"} err="failed to get container status \"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb\": rpc error: code = NotFound desc = could not find container \"d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb\": container with ID starting with d05ba129c5fb0f360f858c4b6bc003646deb9e62dc5fce155872b9940a57e4bb not found: ID does not exist" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384332 4808 scope.go:117] "RemoveContainer" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" Feb 17 16:28:47 crc kubenswrapper[4808]: E0217 16:28:47.384714 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844\": container with ID starting with 2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844 not found: ID does not exist" containerID="2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384761 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844"} err="failed to get container status \"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844\": rpc error: code = NotFound desc = could not find container \"2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844\": container with ID starting with 2202ab54cce46501d924080e87b75d03cda4e99f070d52743be88e1707063844 not found: ID does not exist" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.384797 4808 scope.go:117] "RemoveContainer" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" Feb 17 16:28:47 crc kubenswrapper[4808]: E0217 
16:28:47.385136 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b\": container with ID starting with 64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b not found: ID does not exist" containerID="64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b" Feb 17 16:28:47 crc kubenswrapper[4808]: I0217 16:28:47.385172 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b"} err="failed to get container status \"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b\": rpc error: code = NotFound desc = could not find container \"64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b\": container with ID starting with 64e1f84e31293a6c69e3e994952a776bcd04b97b872f856b3844a61cb99b2e6b not found: ID does not exist" Feb 17 16:28:49 crc kubenswrapper[4808]: I0217 16:28:49.168751 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" path="/var/lib/kubelet/pods/9c5ff0a3-7a28-4be0-bbea-b9058f87ec29/volumes" Feb 17 16:28:56 crc kubenswrapper[4808]: E0217 16:28:56.148211 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:28:56 crc kubenswrapper[4808]: E0217 16:28:56.148402 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:07 crc kubenswrapper[4808]: E0217 16:29:07.178484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:08 crc kubenswrapper[4808]: E0217 16:29:08.147827 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.288877 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.289450 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.289616 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:18 crc kubenswrapper[4808]: E0217 16:29:18.290849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:19 crc kubenswrapper[4808]: E0217 16:29:19.146682 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:32 crc kubenswrapper[4808]: E0217 16:29:32.149377 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:34 crc kubenswrapper[4808]: E0217 16:29:34.147809 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:43 crc kubenswrapper[4808]: E0217 16:29:43.148240 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.278999 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.279902 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.280122 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:29:46 crc kubenswrapper[4808]: E0217 16:29:46.281484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:29:51 crc kubenswrapper[4808]: I0217 16:29:51.566683 4808 generic.go:334] "Generic (PLEG): container finished" podID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerID="92e6ef387cf41dd71a851ea483493cf05b8666e2889e1132cbfb6ad483176127" exitCode=2 Feb 17 16:29:51 crc kubenswrapper[4808]: I0217 16:29:51.566784 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerDied","Data":"92e6ef387cf41dd71a851ea483493cf05b8666e2889e1132cbfb6ad483176127"} Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.137018 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.234124 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") pod \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.234322 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") pod \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\" (UID: \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.234485 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") pod \"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\" (UID: 
\"2084629b-ffd4-4f5e-8db7-070d4a08dd8e\") " Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.243772 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv" (OuterVolumeSpecName: "kube-api-access-kdfxv") pod "2084629b-ffd4-4f5e-8db7-070d4a08dd8e" (UID: "2084629b-ffd4-4f5e-8db7-070d4a08dd8e"). InnerVolumeSpecName "kube-api-access-kdfxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.267140 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2084629b-ffd4-4f5e-8db7-070d4a08dd8e" (UID: "2084629b-ffd4-4f5e-8db7-070d4a08dd8e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.272444 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory" (OuterVolumeSpecName: "inventory") pod "2084629b-ffd4-4f5e-8db7-070d4a08dd8e" (UID: "2084629b-ffd4-4f5e-8db7-070d4a08dd8e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.338882 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.338914 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.338926 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdfxv\" (UniqueName: \"kubernetes.io/projected/2084629b-ffd4-4f5e-8db7-070d4a08dd8e-kube-api-access-kdfxv\") on node \"crc\" DevicePath \"\"" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.589456 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" event={"ID":"2084629b-ffd4-4f5e-8db7-070d4a08dd8e","Type":"ContainerDied","Data":"b7f31d0387d770241189aacd0771c827ab5a7b271e4e7dcc1efa78c199758ae8"} Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.589530 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7f31d0387d770241189aacd0771c827ab5a7b271e4e7dcc1efa78c199758ae8" Feb 17 16:29:53 crc kubenswrapper[4808]: I0217 16:29:53.589614 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-sjckt" Feb 17 16:29:54 crc kubenswrapper[4808]: E0217 16:29:54.147863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:29:57 crc kubenswrapper[4808]: E0217 16:29:57.161295 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.167954 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169279 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169303 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169333 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169345 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169365 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169377 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169413 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169452 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-utilities" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169484 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169496 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="extract-content" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169519 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169532 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:30:00 crc kubenswrapper[4808]: E0217 16:30:00.169553 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.169565 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" Feb 17 16:30:00 crc 
kubenswrapper[4808]: I0217 16:30:00.187835 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c5ff0a3-7a28-4be0-bbea-b9058f87ec29" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.187953 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="12171d1b-4dea-4358-89cd-ba25b219f753" containerName="registry-server" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.187981 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="2084629b-ffd4-4f5e-8db7-070d4a08dd8e" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.189220 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.193691 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.196770 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.212844 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.298427 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.298558 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.300317 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.402362 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.402431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.402506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.403561 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.408995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.424301 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"collect-profiles-29522430-jhp9b\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:00 crc kubenswrapper[4808]: I0217 16:30:00.524920 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.040952 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"] Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.043213 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045544 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045599 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045830 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.045871 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.057314 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"] Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.220829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.220980 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc 
kubenswrapper[4808]: I0217 16:30:01.221059 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.281413 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.322995 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.323425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.324792 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 
crc kubenswrapper[4808]: I0217 16:30:01.329996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.330366 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.342741 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.366183 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.775446 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerStarted","Data":"c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe"} Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.775831 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerStarted","Data":"760de8bd8d09554dd73353da29e851042c810b009a003ac5e43d970dec207854"} Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.810023 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" podStartSLOduration=1.809996564 podStartE2EDuration="1.809996564s" podCreationTimestamp="2026-02-17 16:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 16:30:01.79555796 +0000 UTC m=+2165.311917043" watchObservedRunningTime="2026-02-17 16:30:01.809996564 +0000 UTC m=+2165.326355647" Feb 17 16:30:01 crc kubenswrapper[4808]: W0217 16:30:01.911556 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod486d1a55_6cee_4d24_ab2b_5c5c61c6d3d3.slice/crio-7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90 WatchSource:0}: Error finding container 7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90: Status 404 returned error can't find the container with id 7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90 Feb 17 16:30:01 crc kubenswrapper[4808]: I0217 16:30:01.921343 4808 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"] Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.784429 4808 generic.go:334] "Generic (PLEG): container finished" podID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerID="c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe" exitCode=0 Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.784542 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerDied","Data":"c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe"} Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.786287 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerStarted","Data":"8411ed95197c32b6e4edaeead95a670ced65c70f3a3592064db86f9a1b81cf5a"} Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.786330 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerStarted","Data":"7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90"} Feb 17 16:30:02 crc kubenswrapper[4808]: I0217 16:30:02.818241 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" podStartSLOduration=1.409963092 podStartE2EDuration="1.818223865s" podCreationTimestamp="2026-02-17 16:30:01 +0000 UTC" firstStartedPulling="2026-02-17 16:30:01.916905851 +0000 UTC m=+2165.433264924" lastFinishedPulling="2026-02-17 16:30:02.325166604 +0000 UTC m=+2165.841525697" observedRunningTime="2026-02-17 16:30:02.813684694 +0000 UTC m=+2166.330043777" 
watchObservedRunningTime="2026-02-17 16:30:02.818223865 +0000 UTC m=+2166.334582928" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.313403 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.407570 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") pod \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.407878 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") pod \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.408002 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") pod \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\" (UID: \"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf\") " Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.408491 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume" (OuterVolumeSpecName: "config-volume") pod "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" (UID: "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.409129 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.413330 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" (UID: "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.415803 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw" (OuterVolumeSpecName: "kube-api-access-z9bvw") pod "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" (UID: "e5f89f01-6a5d-4eb4-adc9-cbfbd921accf"). InnerVolumeSpecName "kube-api-access-z9bvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.510918 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.510952 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9bvw\" (UniqueName: \"kubernetes.io/projected/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf-kube-api-access-z9bvw\") on node \"crc\" DevicePath \"\"" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.814845 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" event={"ID":"e5f89f01-6a5d-4eb4-adc9-cbfbd921accf","Type":"ContainerDied","Data":"760de8bd8d09554dd73353da29e851042c810b009a003ac5e43d970dec207854"} Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.814891 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b" Feb 17 16:30:04 crc kubenswrapper[4808]: I0217 16:30:04.814916 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="760de8bd8d09554dd73353da29e851042c810b009a003ac5e43d970dec207854" Feb 17 16:30:05 crc kubenswrapper[4808]: I0217 16:30:05.383486 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 16:30:05 crc kubenswrapper[4808]: I0217 16:30:05.390851 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522385-74pvr"] Feb 17 16:30:07 crc kubenswrapper[4808]: E0217 16:30:07.156887 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:07 crc kubenswrapper[4808]: I0217 16:30:07.173883 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7baa3ebb-6bb0-4744-b096-971958bcd263" path="/var/lib/kubelet/pods/7baa3ebb-6bb0-4744-b096-971958bcd263/volumes" Feb 17 16:30:10 crc kubenswrapper[4808]: E0217 16:30:10.149018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:17 crc kubenswrapper[4808]: I0217 16:30:17.728939 4808 scope.go:117] "RemoveContainer" containerID="4636e3a05a4f1b63b0a37839e73e790b55d96dd321273848e2dfb3f38193ea44" Feb 17 16:30:19 crc kubenswrapper[4808]: 
E0217 16:30:19.149803 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:21 crc kubenswrapper[4808]: E0217 16:30:21.148431 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:32 crc kubenswrapper[4808]: E0217 16:30:32.147549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:36 crc kubenswrapper[4808]: E0217 16:30:36.149255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:45 crc kubenswrapper[4808]: E0217 16:30:45.147471 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:30:51 crc kubenswrapper[4808]: E0217 16:30:51.149276 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:30:51 crc kubenswrapper[4808]: I0217 16:30:51.591858 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:30:51 crc kubenswrapper[4808]: I0217 16:30:51.591927 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:00 crc kubenswrapper[4808]: E0217 16:31:00.149010 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:02 crc kubenswrapper[4808]: E0217 16:31:02.147831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:12 crc kubenswrapper[4808]: E0217 16:31:12.148846 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:14 crc kubenswrapper[4808]: E0217 16:31:14.148656 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:21 crc kubenswrapper[4808]: I0217 16:31:21.592622 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:31:21 crc kubenswrapper[4808]: I0217 16:31:21.593219 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:23 crc kubenswrapper[4808]: E0217 16:31:23.148434 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:26 crc kubenswrapper[4808]: E0217 16:31:26.150261 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:36 crc kubenswrapper[4808]: E0217 16:31:36.147648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:40 crc kubenswrapper[4808]: E0217 16:31:40.148213 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:31:51 crc kubenswrapper[4808]: E0217 16:31:51.148342 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.591677 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.591730 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.591805 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.592557 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.592631 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" gracePeriod=600 Feb 17 16:31:51 crc kubenswrapper[4808]: E0217 16:31:51.733145 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:31:51 crc 
kubenswrapper[4808]: I0217 16:31:51.966533 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" exitCode=0 Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.966570 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"} Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.966630 4808 scope.go:117] "RemoveContainer" containerID="ba9082db1029d7bfb949c1e61cae44b0ec31ca6cae55a6942a3dbac04ecadf0f" Feb 17 16:31:51 crc kubenswrapper[4808]: I0217 16:31:51.967244 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:31:51 crc kubenswrapper[4808]: E0217 16:31:51.967469 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:31:54 crc kubenswrapper[4808]: E0217 16:31:54.148007 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:03 crc kubenswrapper[4808]: I0217 16:32:03.146442 4808 scope.go:117] "RemoveContainer" 
containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:03 crc kubenswrapper[4808]: E0217 16:32:03.147873 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:05 crc kubenswrapper[4808]: E0217 16:32:05.148762 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:07 crc kubenswrapper[4808]: E0217 16:32:07.157964 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:17 crc kubenswrapper[4808]: I0217 16:32:17.154097 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:17 crc kubenswrapper[4808]: E0217 16:32:17.155116 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:18 crc kubenswrapper[4808]: E0217 16:32:18.148843 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:19 crc kubenswrapper[4808]: E0217 16:32:19.147508 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:31 crc kubenswrapper[4808]: I0217 16:32:31.146481 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:31 crc kubenswrapper[4808]: E0217 16:32:31.147175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:33 crc kubenswrapper[4808]: E0217 16:32:33.149042 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:33 crc kubenswrapper[4808]: E0217 16:32:33.152091 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:43 crc kubenswrapper[4808]: I0217 16:32:43.148079 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:43 crc kubenswrapper[4808]: E0217 16:32:43.149406 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:32:45 crc kubenswrapper[4808]: E0217 16:32:45.150933 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:46 crc kubenswrapper[4808]: E0217 16:32:46.147771 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:32:56 crc kubenswrapper[4808]: E0217 
16:32:56.148512 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:32:57 crc kubenswrapper[4808]: I0217 16:32:57.152157 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:32:57 crc kubenswrapper[4808]: E0217 16:32:57.152429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:01 crc kubenswrapper[4808]: E0217 16:33:01.148797 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:09 crc kubenswrapper[4808]: I0217 16:33:09.146060 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:09 crc kubenswrapper[4808]: E0217 16:33:09.147084 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:09 crc kubenswrapper[4808]: E0217 16:33:09.148842 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:15 crc kubenswrapper[4808]: E0217 16:33:15.151542 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:22 crc kubenswrapper[4808]: I0217 16:33:22.145833 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:22 crc kubenswrapper[4808]: E0217 16:33:22.147195 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:24 crc kubenswrapper[4808]: E0217 16:33:24.149383 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:26 crc kubenswrapper[4808]: E0217 16:33:26.148629 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:35 crc kubenswrapper[4808]: E0217 16:33:35.148237 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:36 crc kubenswrapper[4808]: I0217 16:33:36.146511 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:36 crc kubenswrapper[4808]: E0217 16:33:36.147107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:41 crc kubenswrapper[4808]: E0217 16:33:41.154095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:33:48 crc kubenswrapper[4808]: I0217 16:33:48.146985 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:33:48 crc kubenswrapper[4808]: E0217 16:33:48.147900 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:33:49 crc kubenswrapper[4808]: E0217 16:33:49.147848 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:33:53 crc kubenswrapper[4808]: E0217 16:33:53.148240 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:00 crc kubenswrapper[4808]: I0217 16:34:00.146362 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:00 crc kubenswrapper[4808]: E0217 16:34:00.146990 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:34:03 crc kubenswrapper[4808]: E0217 16:34:03.147998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:34:04 crc kubenswrapper[4808]: E0217 16:34:04.147637 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:14 crc kubenswrapper[4808]: I0217 16:34:14.146390 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:14 crc kubenswrapper[4808]: E0217 16:34:14.147308 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:34:15 crc kubenswrapper[4808]: E0217 16:34:15.150951 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:18 crc kubenswrapper[4808]: E0217 16:34:18.149767 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:34:26 crc kubenswrapper[4808]: I0217 16:34:26.149237 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.274075 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.274466 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.274650 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:34:26 crc kubenswrapper[4808]: E0217 16:34:26.276232 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:29 crc kubenswrapper[4808]: I0217 16:34:29.147199 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:29 crc kubenswrapper[4808]: E0217 16:34:29.147974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:34:32 crc kubenswrapper[4808]: E0217 16:34:32.147067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:34:38 crc kubenswrapper[4808]: E0217 16:34:38.148831 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:41 crc kubenswrapper[4808]: I0217 16:34:41.145760 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:41 crc kubenswrapper[4808]: E0217 16:34:41.146459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:34:46 crc kubenswrapper[4808]: E0217 16:34:46.149705 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:34:49 crc kubenswrapper[4808]: E0217 16:34:49.149011 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:34:52 crc kubenswrapper[4808]: I0217 16:34:52.146336 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:34:52 crc kubenswrapper[4808]: E0217 16:34:52.146731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.149454 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.268160 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.268257 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.268422 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:35:01 crc kubenswrapper[4808]: E0217 16:35:01.269714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:35:03 crc kubenswrapper[4808]: I0217 16:35:03.145706 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:35:03 crc kubenswrapper[4808]: E0217 16:35:03.146437 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:35:13 crc kubenswrapper[4808]: E0217 16:35:13.148310 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:35:16 crc kubenswrapper[4808]: I0217 16:35:16.146540 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:35:16 crc kubenswrapper[4808]: E0217 16:35:16.147210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:35:16 crc kubenswrapper[4808]: E0217 16:35:16.155435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:26 crc kubenswrapper[4808]: E0217 16:35:26.149318 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:27 crc kubenswrapper[4808]: I0217 16:35:27.162266 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:27 crc kubenswrapper[4808]: E0217 16:35:27.164285 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:31 crc kubenswrapper[4808]: E0217 16:35:31.148107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:41 crc kubenswrapper[4808]: E0217 16:35:41.149668 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:42 crc kubenswrapper[4808]: I0217 16:35:42.146423 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:42 crc kubenswrapper[4808]: E0217 16:35:42.147107 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:45 crc kubenswrapper[4808]: E0217 16:35:45.148812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:35:55 crc kubenswrapper[4808]: E0217 16:35:55.148477 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:35:57 crc kubenswrapper[4808]: I0217 16:35:57.152078 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:35:57 crc kubenswrapper[4808]: E0217 16:35:57.152693 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:35:58 crc kubenswrapper[4808]: E0217 16:35:58.150156 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:36:08 crc kubenswrapper[4808]: I0217 16:36:08.146294 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:36:08 crc kubenswrapper[4808]: E0217 16:36:08.147101 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:36:10 crc kubenswrapper[4808]: E0217 16:36:10.149348 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.350235 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:12 crc kubenswrapper[4808]: E0217 16:36:12.350954 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerName="collect-profiles"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.350969 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerName="collect-profiles"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.351215 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" containerName="collect-profiles"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.353260 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.363731 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.452538 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.452629 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.452683 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.554969 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555092 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555161 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555542 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.555554 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.578414 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"certified-operators-t6krv\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") " pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:12 crc kubenswrapper[4808]: I0217 16:36:12.683442 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:13 crc kubenswrapper[4808]: E0217 16:36:13.161063 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.357973 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.841125 4808 generic.go:334] "Generic (PLEG): container finished" podID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerID="8411ed95197c32b6e4edaeead95a670ced65c70f3a3592064db86f9a1b81cf5a" exitCode=2
Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.841221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerDied","Data":"8411ed95197c32b6e4edaeead95a670ced65c70f3a3592064db86f9a1b81cf5a"}
Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.847135 4808 generic.go:334] "Generic (PLEG): container finished" podID="33cc2cac-9faa-4273-905f-128750f10c80" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd" exitCode=0
Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.847189 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"}
Feb 17 16:36:13 crc kubenswrapper[4808]: I0217 16:36:13.847220 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerStarted","Data":"2438c932894e0e169fd6358da543273050a3355916f7007c304f5b2829473875"}
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.398403 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.428101 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") pod \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") "
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.428366 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") pod \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") "
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.428395 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") pod \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\" (UID: \"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3\") "
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.469220 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj" (OuterVolumeSpecName: "kube-api-access-cmcgj") pod "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" (UID: "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3"). InnerVolumeSpecName "kube-api-access-cmcgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.482773 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory" (OuterVolumeSpecName: "inventory") pod "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" (UID: "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.496091 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" (UID: "486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.535313 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmcgj\" (UniqueName: \"kubernetes.io/projected/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-kube-api-access-cmcgj\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.535350 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.535361 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.869818 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerStarted","Data":"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"}
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.872023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz" event={"ID":"486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3","Type":"ContainerDied","Data":"7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90"}
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.872048 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f46c1a26483e6a88332ba91471836d6c5c7e3122663fd45f8f638555de77a90"
Feb 17 16:36:15 crc kubenswrapper[4808]: I0217 16:36:15.872092 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz"
Feb 17 16:36:16 crc kubenswrapper[4808]: I0217 16:36:16.887196 4808 generic.go:334] "Generic (PLEG): container finished" podID="33cc2cac-9faa-4273-905f-128750f10c80" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4" exitCode=0
Feb 17 16:36:16 crc kubenswrapper[4808]: I0217 16:36:16.887269 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"}
Feb 17 16:36:17 crc kubenswrapper[4808]: I0217 16:36:17.902868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerStarted","Data":"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"}
Feb 17 16:36:17 crc kubenswrapper[4808]: I0217 16:36:17.923229 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t6krv" podStartSLOduration=2.221646902 podStartE2EDuration="5.923204526s" podCreationTimestamp="2026-02-17 16:36:12 +0000 UTC" firstStartedPulling="2026-02-17 16:36:13.849877092 +0000 UTC m=+2537.366236165" lastFinishedPulling="2026-02-17 16:36:17.551434726 +0000 UTC m=+2541.067793789" observedRunningTime="2026-02-17 16:36:17.918971882 +0000 UTC m=+2541.435330955" watchObservedRunningTime="2026-02-17 16:36:17.923204526 +0000 UTC m=+2541.439563609"
Feb 17 16:36:20 crc kubenswrapper[4808]: I0217 16:36:20.145691 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:36:20 crc kubenswrapper[4808]: E0217 16:36:20.146226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:36:22 crc kubenswrapper[4808]: E0217 16:36:22.147690 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:36:22 crc kubenswrapper[4808]: I0217 16:36:22.683682 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:22 crc kubenswrapper[4808]: I0217 16:36:22.684028 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:22 crc kubenswrapper[4808]: I0217 16:36:22.734858 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:23 crc kubenswrapper[4808]: I0217 16:36:23.006102 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:23 crc kubenswrapper[4808]: I0217 16:36:23.063941 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:24 crc kubenswrapper[4808]: E0217 16:36:24.148318 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:36:24 crc kubenswrapper[4808]: I0217 16:36:24.971976 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t6krv" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server" containerID="cri-o://5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b" gracePeriod=2
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.580257 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.740801 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") pod \"33cc2cac-9faa-4273-905f-128750f10c80\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") "
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.741471 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") pod \"33cc2cac-9faa-4273-905f-128750f10c80\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") "
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.741743 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") pod \"33cc2cac-9faa-4273-905f-128750f10c80\" (UID: \"33cc2cac-9faa-4273-905f-128750f10c80\") "
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.743243 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities" (OuterVolumeSpecName: "utilities") pod "33cc2cac-9faa-4273-905f-128750f10c80" (UID: "33cc2cac-9faa-4273-905f-128750f10c80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.747438 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9" (OuterVolumeSpecName: "kube-api-access-xwnt9") pod "33cc2cac-9faa-4273-905f-128750f10c80" (UID: "33cc2cac-9faa-4273-905f-128750f10c80"). InnerVolumeSpecName "kube-api-access-xwnt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.844412 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.844460 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwnt9\" (UniqueName: \"kubernetes.io/projected/33cc2cac-9faa-4273-905f-128750f10c80-kube-api-access-xwnt9\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.964667 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33cc2cac-9faa-4273-905f-128750f10c80" (UID: "33cc2cac-9faa-4273-905f-128750f10c80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988141 4808 generic.go:334] "Generic (PLEG): container finished" podID="33cc2cac-9faa-4273-905f-128750f10c80" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b" exitCode=0
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988196 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"}
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988229 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t6krv" event={"ID":"33cc2cac-9faa-4273-905f-128750f10c80","Type":"ContainerDied","Data":"2438c932894e0e169fd6358da543273050a3355916f7007c304f5b2829473875"}
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988251 4808 scope.go:117] "RemoveContainer" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"
Feb 17 16:36:25 crc kubenswrapper[4808]: I0217 16:36:25.988464 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t6krv"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.018873 4808 scope.go:117] "RemoveContainer" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.027413 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.036057 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t6krv"]
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.045465 4808 scope.go:117] "RemoveContainer" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.048829 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33cc2cac-9faa-4273-905f-128750f10c80-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.089769 4808 scope.go:117] "RemoveContainer" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"
Feb 17 16:36:26 crc kubenswrapper[4808]: E0217 16:36:26.090282 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b\": container with ID starting with 5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b not found: ID does not exist" containerID="5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.090335 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b"} err="failed to get container status \"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b\": rpc error: code = NotFound desc = could not find container \"5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b\": container with ID starting with 5457c7bcaeafa118d11c137a8052169d692ab4250cc7b69cced9a3c2c6e6084b not found: ID does not exist"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.090366 4808 scope.go:117] "RemoveContainer" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"
Feb 17 16:36:26 crc kubenswrapper[4808]: E0217 16:36:26.091144 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4\": container with ID starting with 077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4 not found: ID does not exist" containerID="077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.091201 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4"} err="failed to get container status \"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4\": rpc error: code = NotFound desc = could not find container \"077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4\": container with ID starting with 077e44c9a2f154dd65f7667cdaea0a5343ca52a9523d095319a495c5f5c86dd4 not found: ID does not exist"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.091236 4808 scope.go:117] "RemoveContainer" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"
Feb 17 16:36:26 crc kubenswrapper[4808]: E0217 16:36:26.091715 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd\": container with ID starting with 6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd not found: ID does not exist" containerID="6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"
Feb 17 16:36:26 crc kubenswrapper[4808]: I0217 16:36:26.091761 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd"} err="failed to get container status \"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd\": rpc error: code = NotFound desc = could not find container \"6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd\": container with ID starting with 6c0f46d7c8aa34df68f09873dff14de5301f914b39e6b9525c0c8e733141a7dd not found: ID does not exist"
Feb 17 16:36:27 crc kubenswrapper[4808]: I0217 16:36:27.168228 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33cc2cac-9faa-4273-905f-128750f10c80" path="/var/lib/kubelet/pods/33cc2cac-9faa-4273-905f-128750f10c80/volumes"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041075 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"]
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041851 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041865 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041879 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-content"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-content"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041914 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-utilities"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041921 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="extract-utilities"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.041933 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.041940 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.042169 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.042196 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="33cc2cac-9faa-4273-905f-128750f10c80" containerName="registry-server"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.043079 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.050261 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.050940 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.051299 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.056330 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.056558 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"]
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.110160 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.110236 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.110271 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.146102 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:36:33 crc kubenswrapper[4808]: E0217 16:36:33.146442 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.211131 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.212086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.212195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.216438 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.216623 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.233225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.388284 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:36:33 crc kubenswrapper[4808]: I0217 16:36:33.939744 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"]
Feb 17 16:36:34 crc kubenswrapper[4808]: I0217 16:36:34.073509 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerStarted","Data":"0bd0464d30a220d6d00def18b5261451af4eeafffd898c8b5ae55cfbfb63623f"}
Feb 17 16:36:34 crc kubenswrapper[4808]: E0217 16:36:34.148736 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:36:35 crc kubenswrapper[4808]: I0217 16:36:35.084313 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerStarted","Data":"65dafe8a1101f4ddfb7e0bce9d223f707cac8bd45bd857f95672b3b349fe2857"}
Feb 17 16:36:35 crc kubenswrapper[4808]: I0217 16:36:35.109970 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" podStartSLOduration=1.540102377 podStartE2EDuration="2.109953124s" podCreationTimestamp="2026-02-17 16:36:33 +0000 UTC" firstStartedPulling="2026-02-17 16:36:33.944832202 +0000 UTC m=+2557.461191275" lastFinishedPulling="2026-02-17 16:36:34.514682939 +0000 UTC m=+2558.031042022" observedRunningTime="2026-02-17 16:36:35.102287628 +0000 UTC m=+2558.618646701" watchObservedRunningTime="2026-02-17 
16:36:35.109953124 +0000 UTC m=+2558.626312197" Feb 17 16:36:39 crc kubenswrapper[4808]: E0217 16:36:39.150113 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:36:47 crc kubenswrapper[4808]: E0217 16:36:47.158035 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:36:48 crc kubenswrapper[4808]: I0217 16:36:48.166263 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:36:48 crc kubenswrapper[4808]: E0217 16:36:48.166952 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:36:50 crc kubenswrapper[4808]: E0217 16:36:50.148717 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:00 crc kubenswrapper[4808]: I0217 
16:37:00.851814 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22" Feb 17 16:37:00 crc kubenswrapper[4808]: E0217 16:37:00.874119 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:01 crc kubenswrapper[4808]: I0217 16:37:01.892253 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d"} Feb 17 16:37:03 crc kubenswrapper[4808]: E0217 16:37:03.151431 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:12 crc kubenswrapper[4808]: E0217 16:37:12.149122 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:15 crc kubenswrapper[4808]: E0217 16:37:15.148812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:26 crc kubenswrapper[4808]: E0217 16:37:26.149073 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:29 crc kubenswrapper[4808]: E0217 16:37:29.151531 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:37 crc kubenswrapper[4808]: E0217 16:37:37.165966 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:42 crc kubenswrapper[4808]: E0217 16:37:42.149179 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:37:51 crc kubenswrapper[4808]: E0217 16:37:51.151520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:37:57 crc kubenswrapper[4808]: E0217 16:37:57.164420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:05 crc kubenswrapper[4808]: E0217 16:38:05.148996 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:11 crc kubenswrapper[4808]: E0217 16:38:11.150056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:16 crc kubenswrapper[4808]: E0217 16:38:16.149639 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:26 crc kubenswrapper[4808]: E0217 16:38:26.147962 4808 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:31 crc kubenswrapper[4808]: E0217 16:38:31.149093 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:39 crc kubenswrapper[4808]: E0217 16:38:39.149233 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.835790 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.844263 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.866976 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.930964 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.931460 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:43 crc kubenswrapper[4808]: I0217 16:38:43.931565 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.033984 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.034090 4808 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.034866 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.034882 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.035075 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.065134 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"redhat-operators-q8vwn\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: E0217 16:38:44.147604 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.202376 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:44 crc kubenswrapper[4808]: I0217 16:38:44.665985 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:38:45 crc kubenswrapper[4808]: I0217 16:38:45.058704 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" exitCode=0 Feb 17 16:38:45 crc kubenswrapper[4808]: I0217 16:38:45.058885 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7"} Feb 17 16:38:45 crc kubenswrapper[4808]: I0217 16:38:45.060040 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerStarted","Data":"66f492c858449812bc56a11568b4164d08498794c2af45b5127f3bbf69b58322"} Feb 17 16:38:46 crc kubenswrapper[4808]: I0217 16:38:46.075471 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerStarted","Data":"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496"} Feb 17 16:38:50 crc kubenswrapper[4808]: I0217 16:38:50.131840 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a88dad2-a141-4d84-85d2-e8b97defad8b" 
containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" exitCode=0 Feb 17 16:38:50 crc kubenswrapper[4808]: I0217 16:38:50.131918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496"} Feb 17 16:38:51 crc kubenswrapper[4808]: I0217 16:38:51.142324 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerStarted","Data":"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e"} Feb 17 16:38:51 crc kubenswrapper[4808]: I0217 16:38:51.196536 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q8vwn" podStartSLOduration=2.711497022 podStartE2EDuration="8.196509523s" podCreationTimestamp="2026-02-17 16:38:43 +0000 UTC" firstStartedPulling="2026-02-17 16:38:45.060761199 +0000 UTC m=+2688.577120282" lastFinishedPulling="2026-02-17 16:38:50.54577369 +0000 UTC m=+2694.062132783" observedRunningTime="2026-02-17 16:38:51.173361513 +0000 UTC m=+2694.689720586" watchObservedRunningTime="2026-02-17 16:38:51.196509523 +0000 UTC m=+2694.712868606" Feb 17 16:38:53 crc kubenswrapper[4808]: E0217 16:38:53.148607 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:38:54 crc kubenswrapper[4808]: I0217 16:38:54.202956 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:54 crc kubenswrapper[4808]: 
I0217 16:38:54.203816 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:38:55 crc kubenswrapper[4808]: I0217 16:38:55.265078 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q8vwn" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" probeResult="failure" output=< Feb 17 16:38:55 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:38:55 crc kubenswrapper[4808]: > Feb 17 16:38:57 crc kubenswrapper[4808]: E0217 16:38:57.154879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:04 crc kubenswrapper[4808]: I0217 16:39:04.297885 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:04 crc kubenswrapper[4808]: I0217 16:39:04.378124 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:04 crc kubenswrapper[4808]: I0217 16:39:04.554741 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:39:06 crc kubenswrapper[4808]: E0217 16:39:06.148081 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:39:06 crc kubenswrapper[4808]: I0217 16:39:06.317350 4808 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q8vwn" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" containerID="cri-o://4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" gracePeriod=2 Feb 17 16:39:06 crc kubenswrapper[4808]: I0217 16:39:06.939028 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.101884 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") pod \"5a88dad2-a141-4d84-85d2-e8b97defad8b\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.101933 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") pod \"5a88dad2-a141-4d84-85d2-e8b97defad8b\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.101994 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") pod \"5a88dad2-a141-4d84-85d2-e8b97defad8b\" (UID: \"5a88dad2-a141-4d84-85d2-e8b97defad8b\") " Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.103125 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities" (OuterVolumeSpecName: "utilities") pod "5a88dad2-a141-4d84-85d2-e8b97defad8b" (UID: "5a88dad2-a141-4d84-85d2-e8b97defad8b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.108813 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb" (OuterVolumeSpecName: "kube-api-access-6v7rb") pod "5a88dad2-a141-4d84-85d2-e8b97defad8b" (UID: "5a88dad2-a141-4d84-85d2-e8b97defad8b"). InnerVolumeSpecName "kube-api-access-6v7rb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.203997 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v7rb\" (UniqueName: \"kubernetes.io/projected/5a88dad2-a141-4d84-85d2-e8b97defad8b-kube-api-access-6v7rb\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.204030 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.236033 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a88dad2-a141-4d84-85d2-e8b97defad8b" (UID: "5a88dad2-a141-4d84-85d2-e8b97defad8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.305506 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a88dad2-a141-4d84-85d2-e8b97defad8b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327263 4808 generic.go:334] "Generic (PLEG): container finished" podID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" exitCode=0 Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327304 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e"} Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327332 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q8vwn" event={"ID":"5a88dad2-a141-4d84-85d2-e8b97defad8b","Type":"ContainerDied","Data":"66f492c858449812bc56a11568b4164d08498794c2af45b5127f3bbf69b58322"} Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327334 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q8vwn" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.327348 4808 scope.go:117] "RemoveContainer" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.360555 4808 scope.go:117] "RemoveContainer" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.372280 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.381729 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q8vwn"] Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.399335 4808 scope.go:117] "RemoveContainer" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.448313 4808 scope.go:117] "RemoveContainer" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" Feb 17 16:39:07 crc kubenswrapper[4808]: E0217 16:39:07.448794 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e\": container with ID starting with 4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e not found: ID does not exist" containerID="4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.448823 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e"} err="failed to get container status \"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e\": rpc error: code = NotFound desc = could not find container 
\"4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e\": container with ID starting with 4c520ed3361db0a15b556f5ff6eea476901f394716647473fc4a59c837079c9e not found: ID does not exist" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.448845 4808 scope.go:117] "RemoveContainer" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" Feb 17 16:39:07 crc kubenswrapper[4808]: E0217 16:39:07.449248 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496\": container with ID starting with c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496 not found: ID does not exist" containerID="c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.449273 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496"} err="failed to get container status \"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496\": rpc error: code = NotFound desc = could not find container \"c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496\": container with ID starting with c7de64a0d581120bef717e1286de6cf57bcc353a82fe899ab03fc15cdf65a496 not found: ID does not exist" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.449287 4808 scope.go:117] "RemoveContainer" containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" Feb 17 16:39:07 crc kubenswrapper[4808]: E0217 16:39:07.449596 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7\": container with ID starting with 7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7 not found: ID does not exist" 
containerID="7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7" Feb 17 16:39:07 crc kubenswrapper[4808]: I0217 16:39:07.449640 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7"} err="failed to get container status \"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7\": rpc error: code = NotFound desc = could not find container \"7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7\": container with ID starting with 7a374d10196aea00cf6516262ebdd7226f9c80c3f45fc5a10080aa5a274591d7 not found: ID does not exist" Feb 17 16:39:09 crc kubenswrapper[4808]: I0217 16:39:09.157910 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" path="/var/lib/kubelet/pods/5a88dad2-a141-4d84-85d2-e8b97defad8b/volumes" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.964447 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:10 crc kubenswrapper[4808]: E0217 16:39:10.965618 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-content" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.965649 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-content" Feb 17 16:39:10 crc kubenswrapper[4808]: E0217 16:39:10.965684 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.965698 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" Feb 17 16:39:10 crc kubenswrapper[4808]: E0217 16:39:10.965734 4808 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-utilities" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.965749 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="extract-utilities" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.966218 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a88dad2-a141-4d84-85d2-e8b97defad8b" containerName="registry-server" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.969075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:10 crc kubenswrapper[4808]: I0217 16:39:10.988355 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.099208 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.099593 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.099798 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod 
\"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.202322 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.202402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.202611 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.203221 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.203327 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"community-operators-mk25l\" (UID: 
\"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.239451 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"community-operators-mk25l\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.290216 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.868463 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:11 crc kubenswrapper[4808]: W0217 16:39:11.870131 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257c9d3f_48cc_4f4f_83f8_9474261e2ca4.slice/crio-4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394 WatchSource:0}: Error finding container 4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394: Status 404 returned error can't find the container with id 4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394 Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.983843 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:11 crc kubenswrapper[4808]: I0217 16:39:11.990178 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.002159 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.126592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.126670 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.127376 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: E0217 16:39:12.147668 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.229879 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.230105 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.230197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.231372 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.231480 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.257326 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"redhat-marketplace-lzjt6\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") " pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.365078 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.384601 4808 generic.go:334] "Generic (PLEG): container finished" podID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" exitCode=0 Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.384655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add"} Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.384691 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerStarted","Data":"4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394"} Feb 17 16:39:12 crc kubenswrapper[4808]: I0217 16:39:12.838129 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:12 crc kubenswrapper[4808]: W0217 16:39:12.845532 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f557dd_9578_4e27_afb8_2c090c0b6fe2.slice/crio-e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3 WatchSource:0}: Error finding container e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3: Status 404 returned error can't find the container with 
id e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3 Feb 17 16:39:13 crc kubenswrapper[4808]: I0217 16:39:13.407997 4808 generic.go:334] "Generic (PLEG): container finished" podID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c" exitCode=0 Feb 17 16:39:13 crc kubenswrapper[4808]: I0217 16:39:13.408059 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"} Feb 17 16:39:13 crc kubenswrapper[4808]: I0217 16:39:13.408095 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerStarted","Data":"e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3"} Feb 17 16:39:14 crc kubenswrapper[4808]: I0217 16:39:14.418760 4808 generic.go:334] "Generic (PLEG): container finished" podID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" exitCode=0 Feb 17 16:39:14 crc kubenswrapper[4808]: I0217 16:39:14.418833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a"} Feb 17 16:39:14 crc kubenswrapper[4808]: I0217 16:39:14.422982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerStarted","Data":"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"} Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.435880 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerStarted","Data":"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d"} Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.438934 4808 generic.go:334] "Generic (PLEG): container finished" podID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9" exitCode=0 Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.439000 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"} Feb 17 16:39:15 crc kubenswrapper[4808]: I0217 16:39:15.462433 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mk25l" podStartSLOduration=3.061521446 podStartE2EDuration="5.462411724s" podCreationTimestamp="2026-02-17 16:39:10 +0000 UTC" firstStartedPulling="2026-02-17 16:39:12.395356779 +0000 UTC m=+2715.911715852" lastFinishedPulling="2026-02-17 16:39:14.796247057 +0000 UTC m=+2718.312606130" observedRunningTime="2026-02-17 16:39:15.458460788 +0000 UTC m=+2718.974819881" watchObservedRunningTime="2026-02-17 16:39:15.462411724 +0000 UTC m=+2718.978770797" Feb 17 16:39:16 crc kubenswrapper[4808]: I0217 16:39:16.449016 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerStarted","Data":"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"} Feb 17 16:39:16 crc kubenswrapper[4808]: I0217 16:39:16.472714 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lzjt6" podStartSLOduration=2.824454738 podStartE2EDuration="5.472697416s" 
podCreationTimestamp="2026-02-17 16:39:11 +0000 UTC" firstStartedPulling="2026-02-17 16:39:13.410310836 +0000 UTC m=+2716.926669919" lastFinishedPulling="2026-02-17 16:39:16.058553524 +0000 UTC m=+2719.574912597" observedRunningTime="2026-02-17 16:39:16.472522671 +0000 UTC m=+2719.988881754" watchObservedRunningTime="2026-02-17 16:39:16.472697416 +0000 UTC m=+2719.989056489" Feb 17 16:39:19 crc kubenswrapper[4808]: E0217 16:39:19.148726 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.290517 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.290925 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.358014 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.543260 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.593960 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.594025 4808 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:39:21 crc kubenswrapper[4808]: I0217 16:39:21.627216 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.365610 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.365656 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.431886 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:22 crc kubenswrapper[4808]: I0217 16:39:22.552021 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lzjt6" Feb 17 16:39:23 crc kubenswrapper[4808]: I0217 16:39:23.519346 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mk25l" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" containerID="cri-o://6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" gracePeriod=2 Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.055625 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.216029 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") pod \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.216238 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") pod \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.216326 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") pod \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\" (UID: \"257c9d3f-48cc-4f4f-83f8-9474261e2ca4\") " Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.217604 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities" (OuterVolumeSpecName: "utilities") pod "257c9d3f-48cc-4f4f-83f8-9474261e2ca4" (UID: "257c9d3f-48cc-4f4f-83f8-9474261e2ca4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.223540 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx" (OuterVolumeSpecName: "kube-api-access-g6xxx") pod "257c9d3f-48cc-4f4f-83f8-9474261e2ca4" (UID: "257c9d3f-48cc-4f4f-83f8-9474261e2ca4"). InnerVolumeSpecName "kube-api-access-g6xxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.319777 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.319805 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6xxx\" (UniqueName: \"kubernetes.io/projected/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-kube-api-access-g6xxx\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.532675 4808 generic.go:334] "Generic (PLEG): container finished" podID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" exitCode=0 Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.532757 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d"} Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.533026 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mk25l" event={"ID":"257c9d3f-48cc-4f4f-83f8-9474261e2ca4","Type":"ContainerDied","Data":"4104199f86a25c4c9e4fa9c7bdb606ea588c4183c6da3390fa280995babbd394"} Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.532856 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mk25l" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.533055 4808 scope.go:117] "RemoveContainer" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.564772 4808 scope.go:117] "RemoveContainer" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.579292 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "257c9d3f-48cc-4f4f-83f8-9474261e2ca4" (UID: "257c9d3f-48cc-4f4f-83f8-9474261e2ca4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.594902 4808 scope.go:117] "RemoveContainer" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.628338 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257c9d3f-48cc-4f4f-83f8-9474261e2ca4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.645489 4808 scope.go:117] "RemoveContainer" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" Feb 17 16:39:24 crc kubenswrapper[4808]: E0217 16:39:24.646050 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d\": container with ID starting with 6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d not found: ID does not exist" containerID="6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d" Feb 17 16:39:24 crc 
kubenswrapper[4808]: I0217 16:39:24.646119 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d"} err="failed to get container status \"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d\": rpc error: code = NotFound desc = could not find container \"6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d\": container with ID starting with 6d7d7cb7eaab69b99d177f638a6d9b174c8556f6e5609c7e27f5361458f7dc4d not found: ID does not exist" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.646161 4808 scope.go:117] "RemoveContainer" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" Feb 17 16:39:24 crc kubenswrapper[4808]: E0217 16:39:24.646656 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a\": container with ID starting with fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a not found: ID does not exist" containerID="fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.646892 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a"} err="failed to get container status \"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a\": rpc error: code = NotFound desc = could not find container \"fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a\": container with ID starting with fc98df33633d8d660711b821b3c95493d06a572131d055bf46e14c3e697ab91a not found: ID does not exist" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.647075 4808 scope.go:117] "RemoveContainer" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" Feb 17 
16:39:24 crc kubenswrapper[4808]: E0217 16:39:24.647656 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add\": container with ID starting with 45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add not found: ID does not exist" containerID="45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.647688 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add"} err="failed to get container status \"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add\": rpc error: code = NotFound desc = could not find container \"45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add\": container with ID starting with 45d999c2987fb418a82509a66be76a15ca8a63bb97febb4600b4d746a45b5add not found: ID does not exist" Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.805181 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"] Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.805409 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lzjt6" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" containerID="cri-o://1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" gracePeriod=2 Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.927533 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:24 crc kubenswrapper[4808]: I0217 16:39:24.938903 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mk25l"] Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 
16:39:25.166704 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" path="/var/lib/kubelet/pods/257c9d3f-48cc-4f4f-83f8-9474261e2ca4/volumes"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.403928 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") pod \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") "
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544284 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") pod \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") "
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544450 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") pod \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\" (UID: \"d7f557dd-9578-4e27-afb8-2c090c0b6fe2\") "
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.544986 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities" (OuterVolumeSpecName: "utilities") pod "d7f557dd-9578-4e27-afb8-2c090c0b6fe2" (UID: "d7f557dd-9578-4e27-afb8-2c090c0b6fe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.545528 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.545988 4808 generic.go:334] "Generic (PLEG): container finished" podID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61" exitCode=0
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"}
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546086 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzjt6" event={"ID":"d7f557dd-9578-4e27-afb8-2c090c0b6fe2","Type":"ContainerDied","Data":"e9af397a0a9842006d4b8caff6ddf87f520ee3b3765a58441257646338588cc3"}
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546094 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzjt6"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.546105 4808 scope.go:117] "RemoveContainer" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.550041 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk" (OuterVolumeSpecName: "kube-api-access-mljbk") pod "d7f557dd-9578-4e27-afb8-2c090c0b6fe2" (UID: "d7f557dd-9578-4e27-afb8-2c090c0b6fe2"). InnerVolumeSpecName "kube-api-access-mljbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.570685 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7f557dd-9578-4e27-afb8-2c090c0b6fe2" (UID: "d7f557dd-9578-4e27-afb8-2c090c0b6fe2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.614356 4808 scope.go:117] "RemoveContainer" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.634791 4808 scope.go:117] "RemoveContainer" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.647532 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mljbk\" (UniqueName: \"kubernetes.io/projected/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-kube-api-access-mljbk\") on node \"crc\" DevicePath \"\""
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.647662 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f557dd-9578-4e27-afb8-2c090c0b6fe2-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.685269 4808 scope.go:117] "RemoveContainer" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"
Feb 17 16:39:25 crc kubenswrapper[4808]: E0217 16:39:25.685779 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61\": container with ID starting with 1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61 not found: ID does not exist" containerID="1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.685882 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61"} err="failed to get container status \"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61\": rpc error: code = NotFound desc = could not find container \"1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61\": container with ID starting with 1c4085eacb3bf18589b2c450bdbc7dc3ddd73319619a1dc42b2d86c146d19a61 not found: ID does not exist"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.685973 4808 scope.go:117] "RemoveContainer" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"
Feb 17 16:39:25 crc kubenswrapper[4808]: E0217 16:39:25.686298 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9\": container with ID starting with cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9 not found: ID does not exist" containerID="cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.686405 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9"} err="failed to get container status \"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9\": rpc error: code = NotFound desc = could not find container \"cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9\": container with ID starting with cf53011b691b7e94643610a3ef82c7b30f65211a2e2c6a396fc0cac16515f6b9 not found: ID does not exist"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.686483 4808 scope.go:117] "RemoveContainer" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"
Feb 17 16:39:25 crc kubenswrapper[4808]: E0217 16:39:25.687126 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c\": container with ID starting with 719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c not found: ID does not exist" containerID="719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.687190 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c"} err="failed to get container status \"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c\": rpc error: code = NotFound desc = could not find container \"719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c\": container with ID starting with 719b54c5c03e3dd7ce20745cfb6f18d5bc2c2dcf265ea2bc1faf0af0bbdfa61c not found: ID does not exist"
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.882725 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"]
Feb 17 16:39:25 crc kubenswrapper[4808]: I0217 16:39:25.890213 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzjt6"]
Feb 17 16:39:26 crc kubenswrapper[4808]: E0217 16:39:26.148403 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:39:27 crc kubenswrapper[4808]: I0217 16:39:27.166251 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" path="/var/lib/kubelet/pods/d7f557dd-9578-4e27-afb8-2c090c0b6fe2/volumes"
Feb 17 16:39:34 crc kubenswrapper[4808]: I0217 16:39:34.148331 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.251669 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.251742 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.251900 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:39:35 crc kubenswrapper[4808]: E0217 16:39:35.253150 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:39:40 crc kubenswrapper[4808]: E0217 16:39:40.148963 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:39:46 crc kubenswrapper[4808]: E0217 16:39:46.152248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:39:51 crc kubenswrapper[4808]: I0217 16:39:51.592465 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:39:51 crc kubenswrapper[4808]: I0217 16:39:51.592965 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:39:53 crc kubenswrapper[4808]: E0217 16:39:53.148119 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:39:59 crc kubenswrapper[4808]: E0217 16:39:59.150432 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.257919 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.258666 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.258841 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 16:40:08 crc kubenswrapper[4808]: E0217 16:40:08.260097 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:40:10 crc kubenswrapper[4808]: E0217 16:40:10.148890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.592258 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.592891 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.592940 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.594065 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 16:40:21 crc kubenswrapper[4808]: I0217 16:40:21.594134 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d" gracePeriod=600
Feb 17 16:40:22 crc kubenswrapper[4808]: E0217 16:40:22.148654 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.170937 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d" exitCode=0
Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.170993 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d"}
Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.171047 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143"}
Feb 17 16:40:22 crc kubenswrapper[4808]: I0217 16:40:22.171065 4808 scope.go:117] "RemoveContainer" containerID="1bc8c301ec8b4441d9a8329001acd7ade818d27cbaa99f4b04c925c309e2eb22"
Feb 17 16:40:23 crc kubenswrapper[4808]: E0217 16:40:23.148659 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:40:35 crc kubenswrapper[4808]: E0217 16:40:35.148589 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:40:35 crc kubenswrapper[4808]: E0217 16:40:35.148905 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:40:46 crc kubenswrapper[4808]: E0217 16:40:46.150095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:40:47 crc kubenswrapper[4808]: E0217 16:40:47.162934 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:40:57 crc kubenswrapper[4808]: E0217 16:40:57.184435 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:41:02 crc kubenswrapper[4808]: E0217 16:41:02.149312 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:41:08 crc kubenswrapper[4808]: E0217 16:41:08.148105 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:41:14 crc kubenswrapper[4808]: E0217 16:41:14.147822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:41:23 crc kubenswrapper[4808]: E0217 16:41:23.148263 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:41:26 crc kubenswrapper[4808]: E0217 16:41:26.149159 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:41:37 crc kubenswrapper[4808]: E0217 16:41:37.176549 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:41:38 crc kubenswrapper[4808]: E0217 16:41:38.148366 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:41:48 crc kubenswrapper[4808]: E0217 16:41:48.149430 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:41:50 crc kubenswrapper[4808]: E0217 16:41:50.147779 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:42:03 crc kubenswrapper[4808]: E0217 16:42:03.148271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:42:05 crc kubenswrapper[4808]: E0217 16:42:05.151911 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:42:16 crc kubenswrapper[4808]: E0217 16:42:16.148977 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:42:20 crc kubenswrapper[4808]: E0217 16:42:20.149599 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:42:21 crc kubenswrapper[4808]: I0217 16:42:21.592358 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:42:21 crc kubenswrapper[4808]: I0217 16:42:21.592461 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:42:28 crc kubenswrapper[4808]: E0217 16:42:28.150497 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:42:32 crc kubenswrapper[4808]: E0217 16:42:32.148262 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:42:41 crc kubenswrapper[4808]: E0217 16:42:41.149866 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:42:46 crc kubenswrapper[4808]: E0217 16:42:46.149091 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:42:51 crc kubenswrapper[4808]: I0217 16:42:51.598671 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 16:42:51 crc kubenswrapper[4808]: I0217 16:42:51.599392 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 16:42:52 crc kubenswrapper[4808]: E0217 16:42:52.150140 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:42:52 crc kubenswrapper[4808]: I0217 16:42:52.886618 4808 generic.go:334] "Generic (PLEG): container finished" podID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerID="65dafe8a1101f4ddfb7e0bce9d223f707cac8bd45bd857f95672b3b349fe2857" exitCode=2
Feb 17 16:42:52 crc kubenswrapper[4808]: I0217 16:42:52.886678 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerDied","Data":"65dafe8a1101f4ddfb7e0bce9d223f707cac8bd45bd857f95672b3b349fe2857"}
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.381248 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.576852 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") pod \"c51156c6-7d2b-4871-9ae0-963c4eb67454\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") "
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.577627 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") pod \"c51156c6-7d2b-4871-9ae0-963c4eb67454\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") "
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.577858 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") pod \"c51156c6-7d2b-4871-9ae0-963c4eb67454\" (UID: \"c51156c6-7d2b-4871-9ae0-963c4eb67454\") "
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.583407 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss" (OuterVolumeSpecName: "kube-api-access-nf9ss") pod "c51156c6-7d2b-4871-9ae0-963c4eb67454" (UID: "c51156c6-7d2b-4871-9ae0-963c4eb67454"). InnerVolumeSpecName "kube-api-access-nf9ss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.615642 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c51156c6-7d2b-4871-9ae0-963c4eb67454" (UID: "c51156c6-7d2b-4871-9ae0-963c4eb67454"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.615722 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory" (OuterVolumeSpecName: "inventory") pod "c51156c6-7d2b-4871-9ae0-963c4eb67454" (UID: "c51156c6-7d2b-4871-9ae0-963c4eb67454"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.680267 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.680295 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c51156c6-7d2b-4871-9ae0-963c4eb67454-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.680303 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf9ss\" (UniqueName: \"kubernetes.io/projected/c51156c6-7d2b-4871-9ae0-963c4eb67454-kube-api-access-nf9ss\") on node \"crc\" DevicePath \"\""
Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.914331 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8"
event={"ID":"c51156c6-7d2b-4871-9ae0-963c4eb67454","Type":"ContainerDied","Data":"0bd0464d30a220d6d00def18b5261451af4eeafffd898c8b5ae55cfbfb63623f"} Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.914757 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bd0464d30a220d6d00def18b5261451af4eeafffd898c8b5ae55cfbfb63623f" Feb 17 16:42:54 crc kubenswrapper[4808]: I0217 16:42:54.914900 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8" Feb 17 16:43:00 crc kubenswrapper[4808]: E0217 16:43:00.149837 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:06 crc kubenswrapper[4808]: E0217 16:43:06.147296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:15 crc kubenswrapper[4808]: E0217 16:43:15.148026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:18 crc kubenswrapper[4808]: E0217 16:43:18.153433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.592092 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.592478 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.592538 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.593641 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:43:21 crc kubenswrapper[4808]: I0217 16:43:21.593735 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" 
containerID="cri-o://1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" gracePeriod=600 Feb 17 16:43:21 crc kubenswrapper[4808]: E0217 16:43:21.720123 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.191708 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" exitCode=0 Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.191749 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143"} Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.191795 4808 scope.go:117] "RemoveContainer" containerID="7e8601a98b232938835916b07f525ce196aee0ee01e8ee4ec9de824633712b8d" Feb 17 16:43:22 crc kubenswrapper[4808]: I0217 16:43:22.192877 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:22 crc kubenswrapper[4808]: E0217 16:43:22.193418 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:28 crc kubenswrapper[4808]: E0217 16:43:28.148858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:30 crc kubenswrapper[4808]: E0217 16:43:30.148818 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.035727 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv"] Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036610 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036628 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036653 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036662 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036678 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036687 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036704 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036712 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036737 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036745 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-content" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036770 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036779 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: E0217 16:43:32.036801 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.036809 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="extract-utilities" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.037039 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c51156c6-7d2b-4871-9ae0-963c4eb67454" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.037062 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="257c9d3f-48cc-4f4f-83f8-9474261e2ca4" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.037081 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f557dd-9578-4e27-afb8-2c090c0b6fe2" containerName="registry-server" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.038139 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.041650 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.042035 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.042306 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.042922 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.066403 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv"] Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.236969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.237242 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.237339 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.339222 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.339296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.339330 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.358800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.359187 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.376546 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:32 crc kubenswrapper[4808]: I0217 16:43:32.663966 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:43:33 crc kubenswrapper[4808]: I0217 16:43:33.215444 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv"] Feb 17 16:43:33 crc kubenswrapper[4808]: I0217 16:43:33.331545 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerStarted","Data":"beadab6c3a4b086c709ebcfa9079469f2ee23c30727b884ea9d18a17c5d65df6"} Feb 17 16:43:34 crc kubenswrapper[4808]: I0217 16:43:34.368414 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerStarted","Data":"29d16363f6fa98f265f09c289debfecc64d954c62ee36d69f30d4932fce9caae"} Feb 17 16:43:34 crc kubenswrapper[4808]: I0217 16:43:34.407677 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" podStartSLOduration=1.950833644 podStartE2EDuration="2.407659793s" podCreationTimestamp="2026-02-17 16:43:32 +0000 UTC" firstStartedPulling="2026-02-17 16:43:33.227560202 +0000 UTC m=+2976.743919275" lastFinishedPulling="2026-02-17 16:43:33.684386311 +0000 UTC m=+2977.200745424" observedRunningTime="2026-02-17 16:43:34.390801608 +0000 UTC m=+2977.907160761" watchObservedRunningTime="2026-02-17 16:43:34.407659793 +0000 UTC m=+2977.924018856" Feb 17 16:43:35 crc kubenswrapper[4808]: I0217 16:43:35.146405 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:35 crc kubenswrapper[4808]: E0217 16:43:35.147272 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:41 crc kubenswrapper[4808]: E0217 16:43:41.149237 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:45 crc kubenswrapper[4808]: E0217 16:43:45.148711 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:47 crc kubenswrapper[4808]: I0217 16:43:47.154407 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:47 crc kubenswrapper[4808]: E0217 16:43:47.155029 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:43:53 crc kubenswrapper[4808]: E0217 16:43:53.151507 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:43:56 crc kubenswrapper[4808]: E0217 16:43:56.148489 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:43:58 crc kubenswrapper[4808]: I0217 16:43:58.146296 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:43:58 crc kubenswrapper[4808]: E0217 16:43:58.147170 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:07 crc kubenswrapper[4808]: E0217 16:44:07.153967 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:08 crc kubenswrapper[4808]: E0217 16:44:08.148268 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:09 crc kubenswrapper[4808]: I0217 16:44:09.146416 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:09 crc kubenswrapper[4808]: E0217 16:44:09.147768 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:19 crc kubenswrapper[4808]: E0217 16:44:19.147945 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:21 crc kubenswrapper[4808]: E0217 16:44:21.146747 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:24 crc kubenswrapper[4808]: I0217 16:44:24.147044 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:24 crc kubenswrapper[4808]: E0217 16:44:24.149672 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:31 crc kubenswrapper[4808]: E0217 16:44:31.162876 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:36 crc kubenswrapper[4808]: E0217 16:44:36.148593 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:38 crc kubenswrapper[4808]: I0217 16:44:38.146959 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:38 crc kubenswrapper[4808]: E0217 16:44:38.147994 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:42 crc kubenswrapper[4808]: I0217 16:44:42.149621 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 
17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.274565 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.274648 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.274810 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:44:42 crc kubenswrapper[4808]: E0217 16:44:42.276202 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:44:47 crc kubenswrapper[4808]: E0217 16:44:47.154121 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:44:52 crc kubenswrapper[4808]: I0217 16:44:52.146441 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:44:52 crc kubenswrapper[4808]: E0217 16:44:52.147248 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:44:55 crc kubenswrapper[4808]: E0217 16:44:55.148096 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.161914 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.164217 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.166326 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.166674 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.175383 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.216878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.217262 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.217390 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.319232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.319296 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.319389 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.320533 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.326187 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.338476 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"collect-profiles-29522445-ttsld\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:00 crc kubenswrapper[4808]: I0217 16:45:00.498144 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:01 crc kubenswrapper[4808]: I0217 16:45:01.013127 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 16:45:01 crc kubenswrapper[4808]: W0217 16:45:01.023067 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod450a44d1_3fb2_41f5_9200_59c6c1838c86.slice/crio-f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10 WatchSource:0}: Error finding container f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10: Status 404 returned error can't find the container with id f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10 Feb 17 16:45:01 crc kubenswrapper[4808]: I0217 16:45:01.412806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerStarted","Data":"51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d"} Feb 17 16:45:01 crc 
kubenswrapper[4808]: I0217 16:45:01.413090 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerStarted","Data":"f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10"} Feb 17 16:45:02 crc kubenswrapper[4808]: E0217 16:45:02.149808 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.424308 4808 generic.go:334] "Generic (PLEG): container finished" podID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerID="51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d" exitCode=0 Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.424365 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerDied","Data":"51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d"} Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.921057 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.980186 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") pod \"450a44d1-3fb2-41f5-9200-59c6c1838c86\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.980287 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") pod \"450a44d1-3fb2-41f5-9200-59c6c1838c86\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.980372 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") pod \"450a44d1-3fb2-41f5-9200-59c6c1838c86\" (UID: \"450a44d1-3fb2-41f5-9200-59c6c1838c86\") " Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.981313 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume" (OuterVolumeSpecName: "config-volume") pod "450a44d1-3fb2-41f5-9200-59c6c1838c86" (UID: "450a44d1-3fb2-41f5-9200-59c6c1838c86"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.985372 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc" (OuterVolumeSpecName: "kube-api-access-jskgc") pod "450a44d1-3fb2-41f5-9200-59c6c1838c86" (UID: "450a44d1-3fb2-41f5-9200-59c6c1838c86"). 
InnerVolumeSpecName "kube-api-access-jskgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:45:02 crc kubenswrapper[4808]: I0217 16:45:02.991339 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "450a44d1-3fb2-41f5-9200-59c6c1838c86" (UID: "450a44d1-3fb2-41f5-9200-59c6c1838c86"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.083223 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/450a44d1-3fb2-41f5-9200-59c6c1838c86-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.083274 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jskgc\" (UniqueName: \"kubernetes.io/projected/450a44d1-3fb2-41f5-9200-59c6c1838c86-kube-api-access-jskgc\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.083290 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/450a44d1-3fb2-41f5-9200-59c6c1838c86-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.433980 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" event={"ID":"450a44d1-3fb2-41f5-9200-59c6c1838c86","Type":"ContainerDied","Data":"f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10"} Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.434012 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld" Feb 17 16:45:03 crc kubenswrapper[4808]: I0217 16:45:03.434016 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78e333a85660ba0ab90b842f06bdef2cc11d93ba9f91c2311c87b04bcae1a10" Feb 17 16:45:04 crc kubenswrapper[4808]: I0217 16:45:04.013410 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:45:04 crc kubenswrapper[4808]: I0217 16:45:04.023863 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522400-gqxpq"] Feb 17 16:45:05 crc kubenswrapper[4808]: I0217 16:45:05.182897 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d231c3b2-ee81-488d-b526-77ab9c8a2822" path="/var/lib/kubelet/pods/d231c3b2-ee81-488d-b526-77ab9c8a2822/volumes" Feb 17 16:45:07 crc kubenswrapper[4808]: I0217 16:45:07.161395 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:07 crc kubenswrapper[4808]: E0217 16:45:07.162780 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:07 crc kubenswrapper[4808]: E0217 16:45:07.163297 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.271114 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.271693 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.271858 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:45:16 crc kubenswrapper[4808]: E0217 16:45:16.273214 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:18 crc kubenswrapper[4808]: I0217 16:45:18.156064 4808 scope.go:117] "RemoveContainer" containerID="a5c43165b9e051b89a89100aebbe7b3cc4c01775c317fec65c06ca231b1fc493" Feb 17 16:45:19 crc kubenswrapper[4808]: E0217 16:45:19.147666 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:21 crc kubenswrapper[4808]: I0217 16:45:21.146194 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:21 crc kubenswrapper[4808]: E0217 16:45:21.146805 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:31 crc kubenswrapper[4808]: E0217 16:45:31.150211 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:32 crc kubenswrapper[4808]: I0217 16:45:32.145853 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:32 crc 
kubenswrapper[4808]: E0217 16:45:32.146138 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:33 crc kubenswrapper[4808]: E0217 16:45:33.150554 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:44 crc kubenswrapper[4808]: E0217 16:45:44.149205 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:45:46 crc kubenswrapper[4808]: I0217 16:45:46.146913 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:45:46 crc kubenswrapper[4808]: E0217 16:45:46.148482 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:46 crc kubenswrapper[4808]: E0217 16:45:46.149795 4808 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:45:57 crc kubenswrapper[4808]: E0217 16:45:57.160807 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:45:58 crc kubenswrapper[4808]: E0217 16:45:58.148499 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:01 crc kubenswrapper[4808]: I0217 16:46:01.146229 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:01 crc kubenswrapper[4808]: E0217 16:46:01.147175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:09 crc kubenswrapper[4808]: E0217 16:46:09.150005 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:46:13 crc kubenswrapper[4808]: E0217 16:46:13.149037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:14 crc kubenswrapper[4808]: I0217 16:46:14.146353 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:14 crc kubenswrapper[4808]: E0217 16:46:14.147079 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:24 crc kubenswrapper[4808]: E0217 16:46:24.149784 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:46:24 crc kubenswrapper[4808]: E0217 16:46:24.149851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:26 crc kubenswrapper[4808]: I0217 16:46:26.146954 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:26 crc kubenswrapper[4808]: E0217 16:46:26.147779 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:36 crc kubenswrapper[4808]: E0217 16:46:36.146926 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:37 crc kubenswrapper[4808]: E0217 16:46:37.152680 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:46:39 crc kubenswrapper[4808]: I0217 16:46:39.146271 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:39 crc kubenswrapper[4808]: E0217 16:46:39.147125 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:49 crc kubenswrapper[4808]: E0217 16:46:49.149757 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:46:51 crc kubenswrapper[4808]: I0217 16:46:51.146489 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:46:51 crc kubenswrapper[4808]: E0217 16:46:51.147109 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:46:52 crc kubenswrapper[4808]: E0217 16:46:52.148979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:02 crc kubenswrapper[4808]: E0217 16:47:02.150860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:04 crc kubenswrapper[4808]: E0217 16:47:04.147859 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:05 crc kubenswrapper[4808]: I0217 16:47:05.146765 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:05 crc kubenswrapper[4808]: E0217 16:47:05.147652 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:14 crc kubenswrapper[4808]: E0217 16:47:14.148635 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:15 crc kubenswrapper[4808]: E0217 16:47:15.147436 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:20 crc kubenswrapper[4808]: I0217 16:47:20.146450 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:20 crc kubenswrapper[4808]: E0217 16:47:20.147070 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:25 crc kubenswrapper[4808]: E0217 16:47:25.149411 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:29 crc kubenswrapper[4808]: E0217 16:47:29.148226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.771295 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:33 crc kubenswrapper[4808]: E0217 16:47:33.772320 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerName="collect-profiles" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.772338 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerName="collect-profiles" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.772544 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" containerName="collect-profiles" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.774295 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.787486 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.836871 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.837167 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.837299 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"certified-operators-ptxmb\" (UID: 
\"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.939729 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940138 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940338 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940646 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.940814 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") 
" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:33 crc kubenswrapper[4808]: I0217 16:47:33.959922 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"certified-operators-ptxmb\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:34 crc kubenswrapper[4808]: I0217 16:47:34.106729 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:34 crc kubenswrapper[4808]: I0217 16:47:34.666433 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.081817 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9d4467-638d-493d-8574-8499f17c5670" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" exitCode=0 Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.081884 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5"} Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.082132 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerStarted","Data":"8fafe0d538171128d4325a574285c5aef22785e8fd1300457f0668def81f80ee"} Feb 17 16:47:35 crc kubenswrapper[4808]: I0217 16:47:35.146487 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:35 crc kubenswrapper[4808]: E0217 16:47:35.147553 4808 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:36 crc kubenswrapper[4808]: I0217 16:47:36.097432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerStarted","Data":"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b"} Feb 17 16:47:38 crc kubenswrapper[4808]: I0217 16:47:38.121310 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9d4467-638d-493d-8574-8499f17c5670" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" exitCode=0 Feb 17 16:47:38 crc kubenswrapper[4808]: I0217 16:47:38.121362 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b"} Feb 17 16:47:39 crc kubenswrapper[4808]: I0217 16:47:39.134133 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerStarted","Data":"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9"} Feb 17 16:47:39 crc kubenswrapper[4808]: E0217 16:47:39.147982 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.107401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.107967 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: E0217 16:47:44.149821 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.156742 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.195432 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ptxmb" podStartSLOduration=7.7533514100000005 podStartE2EDuration="11.195409097s" podCreationTimestamp="2026-02-17 16:47:33 +0000 UTC" firstStartedPulling="2026-02-17 16:47:35.083809114 +0000 UTC m=+3218.600168187" lastFinishedPulling="2026-02-17 16:47:38.525866801 +0000 UTC m=+3222.042225874" observedRunningTime="2026-02-17 16:47:39.15360748 +0000 UTC m=+3222.669966563" watchObservedRunningTime="2026-02-17 16:47:44.195409097 +0000 UTC m=+3227.711768170" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.235216 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:44 crc kubenswrapper[4808]: I0217 16:47:44.407707 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.146632 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:47:46 crc kubenswrapper[4808]: E0217 16:47:46.147246 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.208179 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ptxmb" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" containerID="cri-o://40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" gracePeriod=2 Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.693637 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.815839 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") pod \"7b9d4467-638d-493d-8574-8499f17c5670\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.816061 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") pod \"7b9d4467-638d-493d-8574-8499f17c5670\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.816148 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") pod \"7b9d4467-638d-493d-8574-8499f17c5670\" (UID: \"7b9d4467-638d-493d-8574-8499f17c5670\") " Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.817240 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities" (OuterVolumeSpecName: "utilities") pod "7b9d4467-638d-493d-8574-8499f17c5670" (UID: "7b9d4467-638d-493d-8574-8499f17c5670"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.821674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv" (OuterVolumeSpecName: "kube-api-access-k6snv") pod "7b9d4467-638d-493d-8574-8499f17c5670" (UID: "7b9d4467-638d-493d-8574-8499f17c5670"). InnerVolumeSpecName "kube-api-access-k6snv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.888283 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b9d4467-638d-493d-8574-8499f17c5670" (UID: "7b9d4467-638d-493d-8574-8499f17c5670"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.919254 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.919301 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6snv\" (UniqueName: \"kubernetes.io/projected/7b9d4467-638d-493d-8574-8499f17c5670-kube-api-access-k6snv\") on node \"crc\" DevicePath \"\"" Feb 17 16:47:46 crc kubenswrapper[4808]: I0217 16:47:46.919318 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9d4467-638d-493d-8574-8499f17c5670-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.218780 4808 generic.go:334] "Generic (PLEG): container finished" podID="7b9d4467-638d-493d-8574-8499f17c5670" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" exitCode=0 Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.218832 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ptxmb" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.218833 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9"} Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.219025 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ptxmb" event={"ID":"7b9d4467-638d-493d-8574-8499f17c5670","Type":"ContainerDied","Data":"8fafe0d538171128d4325a574285c5aef22785e8fd1300457f0668def81f80ee"} Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.219068 4808 scope.go:117] "RemoveContainer" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.243293 4808 scope.go:117] "RemoveContainer" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.250262 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.258843 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ptxmb"] Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.267378 4808 scope.go:117] "RemoveContainer" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.316978 4808 scope.go:117] "RemoveContainer" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" Feb 17 16:47:47 crc kubenswrapper[4808]: E0217 16:47:47.317662 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9\": container with ID starting with 40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9 not found: ID does not exist" containerID="40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.317711 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9"} err="failed to get container status \"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9\": rpc error: code = NotFound desc = could not find container \"40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9\": container with ID starting with 40c732f138e421113ed1646234f6a69eabfa71612439a8e04b012186c72a86b9 not found: ID does not exist" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.317746 4808 scope.go:117] "RemoveContainer" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" Feb 17 16:47:47 crc kubenswrapper[4808]: E0217 16:47:47.318161 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b\": container with ID starting with 7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b not found: ID does not exist" containerID="7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.318195 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b"} err="failed to get container status \"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b\": rpc error: code = NotFound desc = could not find container \"7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b\": container with ID 
starting with 7daf403bbf6561e6314c6056d8bb742d0d4a00320ef03dada6c40d1cbea42a8b not found: ID does not exist" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.318215 4808 scope.go:117] "RemoveContainer" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" Feb 17 16:47:47 crc kubenswrapper[4808]: E0217 16:47:47.318609 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5\": container with ID starting with 2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5 not found: ID does not exist" containerID="2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5" Feb 17 16:47:47 crc kubenswrapper[4808]: I0217 16:47:47.318640 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5"} err="failed to get container status \"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5\": rpc error: code = NotFound desc = could not find container \"2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5\": container with ID starting with 2aadb63da0ff36488275b133e78d3349cd437753a033489a12901fde5be0ceb5 not found: ID does not exist" Feb 17 16:47:49 crc kubenswrapper[4808]: I0217 16:47:49.171213 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9d4467-638d-493d-8574-8499f17c5670" path="/var/lib/kubelet/pods/7b9d4467-638d-493d-8574-8499f17c5670/volumes" Feb 17 16:47:51 crc kubenswrapper[4808]: E0217 16:47:51.149220 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:47:55 crc kubenswrapper[4808]: E0217 16:47:55.148013 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:01 crc kubenswrapper[4808]: I0217 16:48:01.146408 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:48:01 crc kubenswrapper[4808]: E0217 16:48:01.149397 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:48:03 crc kubenswrapper[4808]: E0217 16:48:03.148395 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:09 crc kubenswrapper[4808]: E0217 16:48:09.147360 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:15 crc kubenswrapper[4808]: I0217 
16:48:15.174290 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:48:15 crc kubenswrapper[4808]: E0217 16:48:15.175117 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:48:15 crc kubenswrapper[4808]: E0217 16:48:15.178832 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:20 crc kubenswrapper[4808]: E0217 16:48:20.147433 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:26 crc kubenswrapper[4808]: E0217 16:48:26.148617 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:28 crc kubenswrapper[4808]: I0217 16:48:28.146454 4808 scope.go:117] "RemoveContainer" 
containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:48:28 crc kubenswrapper[4808]: I0217 16:48:28.669907 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50"} Feb 17 16:48:31 crc kubenswrapper[4808]: E0217 16:48:31.150399 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:38 crc kubenswrapper[4808]: E0217 16:48:38.149336 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:44 crc kubenswrapper[4808]: E0217 16:48:44.148909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:48:53 crc kubenswrapper[4808]: E0217 16:48:53.151523 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:48:59 crc kubenswrapper[4808]: E0217 16:48:59.148226 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:05 crc kubenswrapper[4808]: E0217 16:49:05.148458 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.501106 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:07 crc kubenswrapper[4808]: E0217 16:49:07.502042 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502065 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" Feb 17 16:49:07 crc kubenswrapper[4808]: E0217 16:49:07.502112 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-utilities" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502125 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-utilities" Feb 17 16:49:07 crc kubenswrapper[4808]: E0217 16:49:07.502171 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-content" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502183 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="extract-content" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.502556 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9d4467-638d-493d-8574-8499f17c5670" containerName="registry-server" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.505108 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.529692 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.675928 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.676029 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.676381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"redhat-operators-p7gsg\" (UID: 
\"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.778420 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.778516 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.778560 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.779055 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.779098 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " 
pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.800978 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"redhat-operators-p7gsg\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:07 crc kubenswrapper[4808]: I0217 16:49:07.836337 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:08 crc kubenswrapper[4808]: I0217 16:49:08.310495 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:09 crc kubenswrapper[4808]: I0217 16:49:09.154866 4808 generic.go:334] "Generic (PLEG): container finished" podID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" exitCode=0 Feb 17 16:49:09 crc kubenswrapper[4808]: I0217 16:49:09.156681 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1"} Feb 17 16:49:09 crc kubenswrapper[4808]: I0217 16:49:09.156742 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerStarted","Data":"c7bdfc2fd5f40c6a9fd9e74ee22160de04cc32cff6460663c59ebee846db84e6"} Feb 17 16:49:10 crc kubenswrapper[4808]: E0217 16:49:10.147731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:10 crc kubenswrapper[4808]: I0217 16:49:10.171290 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerStarted","Data":"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2"} Feb 17 16:49:13 crc kubenswrapper[4808]: I0217 16:49:13.200332 4808 generic.go:334] "Generic (PLEG): container finished" podID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" exitCode=0 Feb 17 16:49:13 crc kubenswrapper[4808]: I0217 16:49:13.200439 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2"} Feb 17 16:49:14 crc kubenswrapper[4808]: I0217 16:49:14.212800 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerStarted","Data":"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a"} Feb 17 16:49:14 crc kubenswrapper[4808]: I0217 16:49:14.239322 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p7gsg" podStartSLOduration=2.765029117 podStartE2EDuration="7.239305336s" podCreationTimestamp="2026-02-17 16:49:07 +0000 UTC" firstStartedPulling="2026-02-17 16:49:09.156484575 +0000 UTC m=+3312.672843638" lastFinishedPulling="2026-02-17 16:49:13.630760754 +0000 UTC m=+3317.147119857" observedRunningTime="2026-02-17 16:49:14.231012733 +0000 UTC m=+3317.747371806" watchObservedRunningTime="2026-02-17 
16:49:14.239305336 +0000 UTC m=+3317.755664409" Feb 17 16:49:17 crc kubenswrapper[4808]: I0217 16:49:17.837133 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:17 crc kubenswrapper[4808]: I0217 16:49:17.837792 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:18 crc kubenswrapper[4808]: I0217 16:49:18.884088 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p7gsg" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" probeResult="failure" output=< Feb 17 16:49:18 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 16:49:18 crc kubenswrapper[4808]: > Feb 17 16:49:20 crc kubenswrapper[4808]: E0217 16:49:20.148331 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:23 crc kubenswrapper[4808]: E0217 16:49:23.146773 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:27 crc kubenswrapper[4808]: I0217 16:49:27.905401 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:27 crc kubenswrapper[4808]: I0217 16:49:27.972735 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:28 crc kubenswrapper[4808]: I0217 16:49:28.151346 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:29 crc kubenswrapper[4808]: I0217 16:49:29.359197 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p7gsg" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" containerID="cri-o://3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" gracePeriod=2 Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.015819 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.116476 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") pod \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.116991 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") pod \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.117205 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") pod \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\" (UID: \"78fee2d5-85c6-48be-bc7f-bcdcb0720230\") " Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.117674 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities" (OuterVolumeSpecName: "utilities") pod "78fee2d5-85c6-48be-bc7f-bcdcb0720230" (UID: "78fee2d5-85c6-48be-bc7f-bcdcb0720230"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.118292 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.121791 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj" (OuterVolumeSpecName: "kube-api-access-tkslj") pod "78fee2d5-85c6-48be-bc7f-bcdcb0720230" (UID: "78fee2d5-85c6-48be-bc7f-bcdcb0720230"). InnerVolumeSpecName "kube-api-access-tkslj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.220711 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkslj\" (UniqueName: \"kubernetes.io/projected/78fee2d5-85c6-48be-bc7f-bcdcb0720230-kube-api-access-tkslj\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.239046 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78fee2d5-85c6-48be-bc7f-bcdcb0720230" (UID: "78fee2d5-85c6-48be-bc7f-bcdcb0720230"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.322639 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78fee2d5-85c6-48be-bc7f-bcdcb0720230-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369611 4808 generic.go:334] "Generic (PLEG): container finished" podID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" exitCode=0 Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369655 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a"} Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369690 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p7gsg" event={"ID":"78fee2d5-85c6-48be-bc7f-bcdcb0720230","Type":"ContainerDied","Data":"c7bdfc2fd5f40c6a9fd9e74ee22160de04cc32cff6460663c59ebee846db84e6"} Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369647 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p7gsg" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.369712 4808 scope.go:117] "RemoveContainer" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.411601 4808 scope.go:117] "RemoveContainer" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.416056 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.438135 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p7gsg"] Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.457831 4808 scope.go:117] "RemoveContainer" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.485535 4808 scope.go:117] "RemoveContainer" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" Feb 17 16:49:30 crc kubenswrapper[4808]: E0217 16:49:30.485861 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a\": container with ID starting with 3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a not found: ID does not exist" containerID="3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.485893 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a"} err="failed to get container status \"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a\": rpc error: code = NotFound desc = could not find container 
\"3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a\": container with ID starting with 3759364be8b05f033434157d113ec3e3045aefb7ca60068d18073c5b8d33762a not found: ID does not exist" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.485914 4808 scope.go:117] "RemoveContainer" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" Feb 17 16:49:30 crc kubenswrapper[4808]: E0217 16:49:30.486208 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2\": container with ID starting with a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2 not found: ID does not exist" containerID="a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.486230 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2"} err="failed to get container status \"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2\": rpc error: code = NotFound desc = could not find container \"a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2\": container with ID starting with a8f094f2bfd8f10f743b554fde672e9f5ad03d309530070a4481f63088f499e2 not found: ID does not exist" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.486245 4808 scope.go:117] "RemoveContainer" containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" Feb 17 16:49:30 crc kubenswrapper[4808]: E0217 16:49:30.486616 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1\": container with ID starting with a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1 not found: ID does not exist" 
containerID="a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1" Feb 17 16:49:30 crc kubenswrapper[4808]: I0217 16:49:30.486656 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1"} err="failed to get container status \"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1\": rpc error: code = NotFound desc = could not find container \"a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1\": container with ID starting with a4fd2a4323cf9e15599cd70d49d32a2eaffec7fc1158a739bb67c40420264af1 not found: ID does not exist" Feb 17 16:49:31 crc kubenswrapper[4808]: I0217 16:49:31.167902 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" path="/var/lib/kubelet/pods/78fee2d5-85c6-48be-bc7f-bcdcb0720230/volumes" Feb 17 16:49:34 crc kubenswrapper[4808]: E0217 16:49:34.147877 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:37 crc kubenswrapper[4808]: E0217 16:49:37.157257 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:45 crc kubenswrapper[4808]: E0217 16:49:45.148196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.082906 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:47 crc kubenswrapper[4808]: E0217 16:49:47.083338 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083352 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" Feb 17 16:49:47 crc kubenswrapper[4808]: E0217 16:49:47.083372 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-content" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083397 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-content" Feb 17 16:49:47 crc kubenswrapper[4808]: E0217 16:49:47.083434 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-utilities" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083444 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="extract-utilities" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.083667 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="78fee2d5-85c6-48be-bc7f-bcdcb0720230" containerName="registry-server" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.090236 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.121182 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.236993 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.237644 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.237876 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.339394 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.339589 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.339637 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.340008 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.340338 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.365488 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"redhat-marketplace-hpcqt\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.422977 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:47 crc kubenswrapper[4808]: I0217 16:49:47.896511 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.148683 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.280214 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.280292 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.280502 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:49:48 crc kubenswrapper[4808]: E0217 16:49:48.281994 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.581665 4808 generic.go:334] "Generic (PLEG): container finished" podID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" exitCode=0 Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.581746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40"} Feb 17 16:49:48 crc kubenswrapper[4808]: I0217 16:49:48.581792 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerStarted","Data":"d194a15df8fb9a4340820ef784455320a798c31ce9ae86a22448ec96ceaf49bb"} Feb 17 16:49:49 crc kubenswrapper[4808]: I0217 16:49:49.594377 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerStarted","Data":"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3"} Feb 17 16:49:49 crc kubenswrapper[4808]: I0217 16:49:49.596853 4808 generic.go:334] "Generic (PLEG): container finished" podID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerID="29d16363f6fa98f265f09c289debfecc64d954c62ee36d69f30d4932fce9caae" exitCode=2 Feb 17 16:49:49 crc kubenswrapper[4808]: I0217 16:49:49.596909 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerDied","Data":"29d16363f6fa98f265f09c289debfecc64d954c62ee36d69f30d4932fce9caae"} Feb 17 16:49:50 crc kubenswrapper[4808]: I0217 
16:49:50.613441 4808 generic.go:334] "Generic (PLEG): container finished" podID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" exitCode=0 Feb 17 16:49:50 crc kubenswrapper[4808]: I0217 16:49:50.613521 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3"} Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.257004 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.343687 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") pod \"d178dfcd-66d8-40ba-b740-909fe6e081ac\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.343787 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") pod \"d178dfcd-66d8-40ba-b740-909fe6e081ac\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.344625 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") pod \"d178dfcd-66d8-40ba-b740-909fe6e081ac\" (UID: \"d178dfcd-66d8-40ba-b740-909fe6e081ac\") " Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.353319 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8" (OuterVolumeSpecName: "kube-api-access-9bjw8") pod "d178dfcd-66d8-40ba-b740-909fe6e081ac" (UID: "d178dfcd-66d8-40ba-b740-909fe6e081ac"). InnerVolumeSpecName "kube-api-access-9bjw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.378352 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory" (OuterVolumeSpecName: "inventory") pod "d178dfcd-66d8-40ba-b740-909fe6e081ac" (UID: "d178dfcd-66d8-40ba-b740-909fe6e081ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.390369 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d178dfcd-66d8-40ba-b740-909fe6e081ac" (UID: "d178dfcd-66d8-40ba-b740-909fe6e081ac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.447119 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bjw8\" (UniqueName: \"kubernetes.io/projected/d178dfcd-66d8-40ba-b740-909fe6e081ac-kube-api-access-9bjw8\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.447154 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.447185 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d178dfcd-66d8-40ba-b740-909fe6e081ac-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.628741 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" event={"ID":"d178dfcd-66d8-40ba-b740-909fe6e081ac","Type":"ContainerDied","Data":"beadab6c3a4b086c709ebcfa9079469f2ee23c30727b884ea9d18a17c5d65df6"} Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.629033 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beadab6c3a4b086c709ebcfa9079469f2ee23c30727b884ea9d18a17c5d65df6" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.628809 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv" Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.636853 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerStarted","Data":"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632"} Feb 17 16:49:51 crc kubenswrapper[4808]: I0217 16:49:51.659374 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hpcqt" podStartSLOduration=2.202733944 podStartE2EDuration="4.659358689s" podCreationTimestamp="2026-02-17 16:49:47 +0000 UTC" firstStartedPulling="2026-02-17 16:49:48.585064859 +0000 UTC m=+3352.101423952" lastFinishedPulling="2026-02-17 16:49:51.041689624 +0000 UTC m=+3354.558048697" observedRunningTime="2026-02-17 16:49:51.655936427 +0000 UTC m=+3355.172295500" watchObservedRunningTime="2026-02-17 16:49:51.659358689 +0000 UTC m=+3355.175717762" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.423915 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.424465 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.489985 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.730101 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:49:57 crc kubenswrapper[4808]: I0217 16:49:57.781970 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:49:59 
crc kubenswrapper[4808]: I0217 16:49:59.708332 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hpcqt" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" containerID="cri-o://7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" gracePeriod=2 Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.148335 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.282137 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.461742 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") pod \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.461822 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") pod \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\" (UID: \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.462052 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") pod \"376f1060-b0d7-4a70-8d5d-6ce46dd99721\" (UID: 
\"376f1060-b0d7-4a70-8d5d-6ce46dd99721\") " Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.462704 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities" (OuterVolumeSpecName: "utilities") pod "376f1060-b0d7-4a70-8d5d-6ce46dd99721" (UID: "376f1060-b0d7-4a70-8d5d-6ce46dd99721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.467633 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8" (OuterVolumeSpecName: "kube-api-access-zqgn8") pod "376f1060-b0d7-4a70-8d5d-6ce46dd99721" (UID: "376f1060-b0d7-4a70-8d5d-6ce46dd99721"). InnerVolumeSpecName "kube-api-access-zqgn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.485254 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "376f1060-b0d7-4a70-8d5d-6ce46dd99721" (UID: "376f1060-b0d7-4a70-8d5d-6ce46dd99721"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.564730 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.564775 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376f1060-b0d7-4a70-8d5d-6ce46dd99721-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.564794 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqgn8\" (UniqueName: \"kubernetes.io/projected/376f1060-b0d7-4a70-8d5d-6ce46dd99721-kube-api-access-zqgn8\") on node \"crc\" DevicePath \"\"" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723033 4808 generic.go:334] "Generic (PLEG): container finished" podID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" exitCode=0 Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723096 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632"} Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723122 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpcqt" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723149 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpcqt" event={"ID":"376f1060-b0d7-4a70-8d5d-6ce46dd99721","Type":"ContainerDied","Data":"d194a15df8fb9a4340820ef784455320a798c31ce9ae86a22448ec96ceaf49bb"} Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.723179 4808 scope.go:117] "RemoveContainer" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.779682 4808 scope.go:117] "RemoveContainer" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.787289 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.800444 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpcqt"] Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.805631 4808 scope.go:117] "RemoveContainer" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.873304 4808 scope.go:117] "RemoveContainer" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.873854 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632\": container with ID starting with 7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632 not found: ID does not exist" containerID="7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.873906 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632"} err="failed to get container status \"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632\": rpc error: code = NotFound desc = could not find container \"7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632\": container with ID starting with 7b1126a69cdd91866ac5c85667d69a849292acb693965f5dacaf850152596632 not found: ID does not exist" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.873934 4808 scope.go:117] "RemoveContainer" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 16:50:00.874265 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3\": container with ID starting with 6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3 not found: ID does not exist" containerID="6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.874317 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3"} err="failed to get container status \"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3\": rpc error: code = NotFound desc = could not find container \"6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3\": container with ID starting with 6fe4dd82b1875674fd59687e40bffbc7f31da63004d13abe9e64a3273979ebc3 not found: ID does not exist" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.874349 4808 scope.go:117] "RemoveContainer" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" Feb 17 16:50:00 crc kubenswrapper[4808]: E0217 
16:50:00.874614 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40\": container with ID starting with 8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40 not found: ID does not exist" containerID="8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40" Feb 17 16:50:00 crc kubenswrapper[4808]: I0217 16:50:00.874641 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40"} err="failed to get container status \"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40\": rpc error: code = NotFound desc = could not find container \"8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40\": container with ID starting with 8f156eab1b7f76de86b6dee1414bbbba30b38fd134afd08463c950f30d1e3d40 not found: ID does not exist" Feb 17 16:50:01 crc kubenswrapper[4808]: E0217 16:50:01.149056 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:01 crc kubenswrapper[4808]: I0217 16:50:01.162703 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" path="/var/lib/kubelet/pods/376f1060-b0d7-4a70-8d5d-6ce46dd99721/volumes" Feb 17 16:50:12 crc kubenswrapper[4808]: E0217 16:50:12.149317 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:14 crc kubenswrapper[4808]: E0217 16:50:14.147594 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:24 crc kubenswrapper[4808]: E0217 16:50:24.147067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.293721 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.294412 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.294633 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:50:28 crc kubenswrapper[4808]: E0217 16:50:28.296026 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:37 crc kubenswrapper[4808]: E0217 16:50:37.155067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:40 crc kubenswrapper[4808]: E0217 16:50:40.148313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:50:49 crc kubenswrapper[4808]: E0217 16:50:49.147380 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:50:51 crc kubenswrapper[4808]: I0217 16:50:51.592267 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:50:51 crc kubenswrapper[4808]: I0217 16:50:51.592918 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:50:52 crc kubenswrapper[4808]: E0217 16:50:52.148546 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:01 crc kubenswrapper[4808]: E0217 16:51:01.148807 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:07 crc kubenswrapper[4808]: E0217 16:51:07.161161 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.039665 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w"] Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.047640 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.047799 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 
16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.047970 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.048087 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.048204 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-utilities" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.048318 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-utilities" Feb 17 16:51:09 crc kubenswrapper[4808]: E0217 16:51:09.048465 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-content" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.048634 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="extract-content" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.049154 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d178dfcd-66d8-40ba-b740-909fe6e081ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.049321 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="376f1060-b0d7-4a70-8d5d-6ce46dd99721" containerName="registry-server" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.050666 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.056273 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w"] Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089695 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089932 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089806 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.089695 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.138358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.138633 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc 
kubenswrapper[4808]: I0217 16:51:09.138767 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.240665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.240769 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.240805 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.250973 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.256551 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.257229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.408446 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:51:09 crc kubenswrapper[4808]: I0217 16:51:09.975404 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w"] Feb 17 16:51:10 crc kubenswrapper[4808]: I0217 16:51:10.558078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerStarted","Data":"4d7afca44c0ce541015a9eaa5dd29ff4546d0353ecc28cb2a4ccb253fd063a02"} Feb 17 16:51:11 crc kubenswrapper[4808]: I0217 16:51:11.569915 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerStarted","Data":"eda4c8fb0a2fa7440b4edbd3589d922c68fac2ff1d127cf6afae08986f0dcae1"} Feb 17 16:51:11 crc kubenswrapper[4808]: I0217 16:51:11.595961 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" podStartSLOduration=2.153054559 podStartE2EDuration="2.595926054s" podCreationTimestamp="2026-02-17 16:51:09 +0000 UTC" firstStartedPulling="2026-02-17 16:51:09.966545458 +0000 UTC m=+3433.482904541" lastFinishedPulling="2026-02-17 16:51:10.409416953 +0000 UTC m=+3433.925776036" observedRunningTime="2026-02-17 16:51:11.587525899 +0000 UTC m=+3435.103884992" watchObservedRunningTime="2026-02-17 16:51:11.595926054 +0000 UTC m=+3435.112285177" Feb 17 16:51:16 crc kubenswrapper[4808]: E0217 16:51:16.148486 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:19 crc kubenswrapper[4808]: E0217 16:51:19.152054 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:21 crc kubenswrapper[4808]: I0217 16:51:21.592139 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:21 crc kubenswrapper[4808]: I0217 16:51:21.592934 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:29 crc kubenswrapper[4808]: E0217 16:51:29.147363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:32 crc kubenswrapper[4808]: E0217 16:51:32.149822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:40 crc kubenswrapper[4808]: E0217 16:51:40.148741 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:43 crc kubenswrapper[4808]: E0217 16:51:43.148985 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.592377 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.592992 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.593044 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.593942 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:51:51 crc kubenswrapper[4808]: I0217 16:51:51.594005 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50" gracePeriod=600 Feb 17 16:51:52 crc kubenswrapper[4808]: I0217 16:51:52.016747 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50" exitCode=0 Feb 17 16:51:52 crc kubenswrapper[4808]: I0217 16:51:52.016856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50"} Feb 17 16:51:52 crc kubenswrapper[4808]: I0217 16:51:52.017155 4808 scope.go:117] "RemoveContainer" containerID="1d6b62da85cac0888e68836087131544de96c37066f3fa481bdeda1d95bfa143" Feb 17 16:51:53 crc kubenswrapper[4808]: E0217 16:51:53.158259 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:51:54 crc kubenswrapper[4808]: I0217 16:51:54.038916 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"} Feb 17 16:51:55 crc kubenswrapper[4808]: E0217 16:51:55.148350 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:05 crc kubenswrapper[4808]: E0217 16:52:05.148349 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:52:07 crc kubenswrapper[4808]: E0217 16:52:07.158925 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:18 crc kubenswrapper[4808]: E0217 16:52:18.148833 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:20 crc kubenswrapper[4808]: E0217 16:52:20.149140 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.081677 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.084688 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.099836 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.099994 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.100054 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.109936 4808 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.203112 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.203800 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.203904 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.204294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.204459 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " 
pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.233676 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"community-operators-7cs6t\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.414257 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:23 crc kubenswrapper[4808]: I0217 16:52:23.960534 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:23 crc kubenswrapper[4808]: W0217 16:52:23.967364 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5952700e_521a_4201_9352_33db5d11abf4.slice/crio-37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286 WatchSource:0}: Error finding container 37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286: Status 404 returned error can't find the container with id 37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286 Feb 17 16:52:24 crc kubenswrapper[4808]: I0217 16:52:24.348733 4808 generic.go:334] "Generic (PLEG): container finished" podID="5952700e-521a-4201-9352-33db5d11abf4" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f" exitCode=0 Feb 17 16:52:24 crc kubenswrapper[4808]: I0217 16:52:24.349105 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"} Feb 17 16:52:24 crc kubenswrapper[4808]: I0217 16:52:24.349147 
4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerStarted","Data":"37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286"} Feb 17 16:52:25 crc kubenswrapper[4808]: I0217 16:52:25.359077 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerStarted","Data":"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"} Feb 17 16:52:26 crc kubenswrapper[4808]: I0217 16:52:26.370504 4808 generic.go:334] "Generic (PLEG): container finished" podID="5952700e-521a-4201-9352-33db5d11abf4" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b" exitCode=0 Feb 17 16:52:26 crc kubenswrapper[4808]: I0217 16:52:26.370560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"} Feb 17 16:52:27 crc kubenswrapper[4808]: I0217 16:52:27.382489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerStarted","Data":"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"} Feb 17 16:52:27 crc kubenswrapper[4808]: I0217 16:52:27.412014 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7cs6t" podStartSLOduration=1.966009074 podStartE2EDuration="4.411992543s" podCreationTimestamp="2026-02-17 16:52:23 +0000 UTC" firstStartedPulling="2026-02-17 16:52:24.351358379 +0000 UTC m=+3507.867717452" lastFinishedPulling="2026-02-17 16:52:26.797341848 +0000 UTC m=+3510.313700921" observedRunningTime="2026-02-17 
16:52:27.406019113 +0000 UTC m=+3510.922378226" watchObservedRunningTime="2026-02-17 16:52:27.411992543 +0000 UTC m=+3510.928351626" Feb 17 16:52:33 crc kubenswrapper[4808]: E0217 16:52:33.147355 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:33 crc kubenswrapper[4808]: E0217 16:52:33.147459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.415370 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.415432 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.468664 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.519806 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:33 crc kubenswrapper[4808]: I0217 16:52:33.717979 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:35 crc kubenswrapper[4808]: I0217 16:52:35.463864 4808 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-7cs6t" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" containerID="cri-o://153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" gracePeriod=2 Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.101391 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.196264 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") pod \"5952700e-521a-4201-9352-33db5d11abf4\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.196442 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") pod \"5952700e-521a-4201-9352-33db5d11abf4\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.196484 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") pod \"5952700e-521a-4201-9352-33db5d11abf4\" (UID: \"5952700e-521a-4201-9352-33db5d11abf4\") " Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.200487 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities" (OuterVolumeSpecName: "utilities") pod "5952700e-521a-4201-9352-33db5d11abf4" (UID: "5952700e-521a-4201-9352-33db5d11abf4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.208079 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5" (OuterVolumeSpecName: "kube-api-access-7rqx5") pod "5952700e-521a-4201-9352-33db5d11abf4" (UID: "5952700e-521a-4201-9352-33db5d11abf4"). InnerVolumeSpecName "kube-api-access-7rqx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.299645 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rqx5\" (UniqueName: \"kubernetes.io/projected/5952700e-521a-4201-9352-33db5d11abf4-kube-api-access-7rqx5\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.299684 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.436143 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5952700e-521a-4201-9352-33db5d11abf4" (UID: "5952700e-521a-4201-9352-33db5d11abf4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485056 4808 generic.go:334] "Generic (PLEG): container finished" podID="5952700e-521a-4201-9352-33db5d11abf4" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" exitCode=0 Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485111 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"} Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485154 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7cs6t" event={"ID":"5952700e-521a-4201-9352-33db5d11abf4","Type":"ContainerDied","Data":"37c98d9de299e8566c34e82d2758704d8f9b59e70d8144af01cba040ee87a286"} Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.485176 4808 scope.go:117] "RemoveContainer" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.486520 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7cs6t" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.506424 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5952700e-521a-4201-9352-33db5d11abf4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.509609 4808 scope.go:117] "RemoveContainer" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.532265 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.541821 4808 scope.go:117] "RemoveContainer" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.544158 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7cs6t"] Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.579219 4808 scope.go:117] "RemoveContainer" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" Feb 17 16:52:36 crc kubenswrapper[4808]: E0217 16:52:36.579836 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b\": container with ID starting with 153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b not found: ID does not exist" containerID="153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.579922 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b"} err="failed to get container status 
\"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b\": rpc error: code = NotFound desc = could not find container \"153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b\": container with ID starting with 153d3a19ce025670bd8c5af0343a9602ff029535a3e6df8b43c60f6bfe57dc9b not found: ID does not exist" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.579990 4808 scope.go:117] "RemoveContainer" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b" Feb 17 16:52:36 crc kubenswrapper[4808]: E0217 16:52:36.580381 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b\": container with ID starting with 41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b not found: ID does not exist" containerID="41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.580435 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b"} err="failed to get container status \"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b\": rpc error: code = NotFound desc = could not find container \"41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b\": container with ID starting with 41460f1113d1536dd9edd491a988a7dd8cf67317bb755f3a42694ee4db124b0b not found: ID does not exist" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.580461 4808 scope.go:117] "RemoveContainer" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f" Feb 17 16:52:36 crc kubenswrapper[4808]: E0217 16:52:36.580859 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f\": container with ID starting with 6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f not found: ID does not exist" containerID="6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f" Feb 17 16:52:36 crc kubenswrapper[4808]: I0217 16:52:36.580939 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f"} err="failed to get container status \"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f\": rpc error: code = NotFound desc = could not find container \"6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f\": container with ID starting with 6223c29dde7884b6815555877c62366f852f82bee876d646dfc281bd4c82062f not found: ID does not exist" Feb 17 16:52:37 crc kubenswrapper[4808]: I0217 16:52:37.164093 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5952700e-521a-4201-9352-33db5d11abf4" path="/var/lib/kubelet/pods/5952700e-521a-4201-9352-33db5d11abf4/volumes" Feb 17 16:52:45 crc kubenswrapper[4808]: E0217 16:52:45.150271 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:52:48 crc kubenswrapper[4808]: E0217 16:52:48.147936 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:52:57 crc kubenswrapper[4808]: E0217 
16:52:57.162277 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:53:03 crc kubenswrapper[4808]: E0217 16:53:03.147868 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:53:08 crc kubenswrapper[4808]: E0217 16:53:08.148950 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:53:18 crc kubenswrapper[4808]: E0217 16:53:18.147825 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:53:20 crc kubenswrapper[4808]: E0217 16:53:20.147120 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:53:31 crc kubenswrapper[4808]: E0217 16:53:31.148149 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:53:33 crc kubenswrapper[4808]: E0217 16:53:33.148411 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:53:45 crc kubenswrapper[4808]: E0217 16:53:45.147921 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:53:48 crc kubenswrapper[4808]: E0217 16:53:48.148518 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:54:00 crc kubenswrapper[4808]: E0217 16:54:00.147814 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:01 crc kubenswrapper[4808]: E0217 16:54:01.147347 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:54:15 crc kubenswrapper[4808]: E0217 16:54:15.149937 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:16 crc kubenswrapper[4808]: E0217 16:54:16.148076 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:54:21 crc kubenswrapper[4808]: I0217 16:54:21.592503 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:21 crc kubenswrapper[4808]: I0217 16:54:21.593071 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:26 crc kubenswrapper[4808]: E0217 16:54:26.148414 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:31 crc kubenswrapper[4808]: E0217 16:54:31.149373 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:54:39 crc kubenswrapper[4808]: E0217 16:54:39.149175 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:45 crc kubenswrapper[4808]: E0217 16:54:45.149015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:54:51 crc kubenswrapper[4808]: I0217 16:54:51.592547 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:54:51 crc kubenswrapper[4808]: I0217 16:54:51.593077 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:54:54 crc kubenswrapper[4808]: E0217 16:54:54.150030 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:54:58 crc kubenswrapper[4808]: I0217 16:54:58.148232 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.248359 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.248420 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.248649 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 16:54:58 crc kubenswrapper[4808]: E0217 16:54:58.249851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:07 crc kubenswrapper[4808]: E0217 16:55:07.156762 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:11 crc kubenswrapper[4808]: E0217 16:55:11.146974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:21 crc kubenswrapper[4808]: E0217 16:55:21.149039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592071 4808 patch_prober.go:28] interesting 
pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592131 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592173 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.592981 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 16:55:21 crc kubenswrapper[4808]: I0217 16:55:21.593049 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" gracePeriod=600 Feb 17 16:55:21 crc kubenswrapper[4808]: E0217 16:55:21.720839 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.231418 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" exitCode=0 Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.231477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"} Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.231544 4808 scope.go:117] "RemoveContainer" containerID="2a8ba27f36ba0ee53790b7b2ad1919c83731b5c9274456151ce2d8a4df4fea50" Feb 17 16:55:22 crc kubenswrapper[4808]: I0217 16:55:22.232530 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:55:22 crc kubenswrapper[4808]: E0217 16:55:22.232879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:55:26 crc kubenswrapper[4808]: E0217 16:55:26.148494 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.276851 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.277486 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.277668 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 16:55:35 crc kubenswrapper[4808]: E0217 16:55:35.279204 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:37 crc kubenswrapper[4808]: I0217 16:55:37.157643 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:55:37 crc kubenswrapper[4808]: E0217 16:55:37.158292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:55:39 crc kubenswrapper[4808]: E0217 16:55:39.149408 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:46 crc kubenswrapper[4808]: E0217 16:55:46.147864 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:55:51 crc kubenswrapper[4808]: E0217 16:55:51.148287 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:55:52 crc kubenswrapper[4808]: I0217 16:55:52.146300 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:55:52 crc kubenswrapper[4808]: E0217 16:55:52.147001 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:01 crc kubenswrapper[4808]: E0217 16:56:01.148638 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:05 crc kubenswrapper[4808]: E0217 16:56:05.148687 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:07 crc kubenswrapper[4808]: I0217 16:56:07.161706 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:07 crc kubenswrapper[4808]: E0217 16:56:07.162377 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:16 crc kubenswrapper[4808]: E0217 16:56:16.148197 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:18 crc kubenswrapper[4808]: E0217 16:56:18.148303 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:22 crc kubenswrapper[4808]: I0217 16:56:22.146326 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:22 crc kubenswrapper[4808]: E0217 16:56:22.147065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:30 crc kubenswrapper[4808]: E0217 16:56:30.147890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:30 crc kubenswrapper[4808]: E0217 16:56:30.147931 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:34 crc kubenswrapper[4808]: I0217 16:56:34.146386 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:34 crc kubenswrapper[4808]: E0217 16:56:34.147648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:41 crc kubenswrapper[4808]: E0217 16:56:41.150417 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:56:45 crc kubenswrapper[4808]: E0217 16:56:45.149535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:56:47 crc kubenswrapper[4808]: I0217 16:56:47.156226 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:56:47 crc kubenswrapper[4808]: E0217 16:56:47.157040 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:56:55 crc kubenswrapper[4808]: E0217 16:56:55.149979 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:00 crc kubenswrapper[4808]: E0217 16:57:00.150198 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:02 crc kubenswrapper[4808]: I0217 16:57:02.148186 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:02 crc kubenswrapper[4808]: E0217 16:57:02.148910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:08 crc kubenswrapper[4808]: E0217 16:57:08.148465 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:14 crc kubenswrapper[4808]: I0217 16:57:14.146642 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:14 crc kubenswrapper[4808]: E0217 16:57:14.147715 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:14 crc kubenswrapper[4808]: E0217 16:57:14.148393 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:22 crc kubenswrapper[4808]: I0217 16:57:22.513981 4808 generic.go:334] "Generic (PLEG): container finished" podID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" 
containerID="eda4c8fb0a2fa7440b4edbd3589d922c68fac2ff1d127cf6afae08986f0dcae1" exitCode=2 Feb 17 16:57:22 crc kubenswrapper[4808]: I0217 16:57:22.514203 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerDied","Data":"eda4c8fb0a2fa7440b4edbd3589d922c68fac2ff1d127cf6afae08986f0dcae1"} Feb 17 16:57:23 crc kubenswrapper[4808]: E0217 16:57:23.148822 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.129629 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.294127 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") pod \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.294288 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") pod \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.294345 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrlwl\" (UniqueName: 
\"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") pod \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\" (UID: \"11efc7ce-322d-4bfe-95ad-c84d779a80d8\") " Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.302082 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl" (OuterVolumeSpecName: "kube-api-access-xrlwl") pod "11efc7ce-322d-4bfe-95ad-c84d779a80d8" (UID: "11efc7ce-322d-4bfe-95ad-c84d779a80d8"). InnerVolumeSpecName "kube-api-access-xrlwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.330207 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory" (OuterVolumeSpecName: "inventory") pod "11efc7ce-322d-4bfe-95ad-c84d779a80d8" (UID: "11efc7ce-322d-4bfe-95ad-c84d779a80d8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.340374 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11efc7ce-322d-4bfe-95ad-c84d779a80d8" (UID: "11efc7ce-322d-4bfe-95ad-c84d779a80d8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.397358 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.397404 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11efc7ce-322d-4bfe-95ad-c84d779a80d8-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.397417 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrlwl\" (UniqueName: \"kubernetes.io/projected/11efc7ce-322d-4bfe-95ad-c84d779a80d8-kube-api-access-xrlwl\") on node \"crc\" DevicePath \"\"" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.536735 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" event={"ID":"11efc7ce-322d-4bfe-95ad-c84d779a80d8","Type":"ContainerDied","Data":"4d7afca44c0ce541015a9eaa5dd29ff4546d0353ecc28cb2a4ccb253fd063a02"} Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.536781 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7afca44c0ce541015a9eaa5dd29ff4546d0353ecc28cb2a4ccb253fd063a02" Feb 17 16:57:24 crc kubenswrapper[4808]: I0217 16:57:24.536816 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w" Feb 17 16:57:25 crc kubenswrapper[4808]: E0217 16:57:25.148176 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:27 crc kubenswrapper[4808]: I0217 16:57:27.151691 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:27 crc kubenswrapper[4808]: E0217 16:57:27.152386 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:37 crc kubenswrapper[4808]: E0217 16:57:37.156709 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:38 crc kubenswrapper[4808]: E0217 16:57:38.147313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" 
Feb 17 16:57:42 crc kubenswrapper[4808]: I0217 16:57:42.146086 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:42 crc kubenswrapper[4808]: E0217 16:57:42.147136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:57:50 crc kubenswrapper[4808]: E0217 16:57:50.149309 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:57:51 crc kubenswrapper[4808]: E0217 16:57:51.148294 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:57:53 crc kubenswrapper[4808]: I0217 16:57:53.145719 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:57:53 crc kubenswrapper[4808]: E0217 16:57:53.146293 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:01 crc kubenswrapper[4808]: E0217 16:58:01.148315 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:03 crc kubenswrapper[4808]: E0217 16:58:03.147671 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:05 crc kubenswrapper[4808]: I0217 16:58:05.146479 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:05 crc kubenswrapper[4808]: E0217 16:58:05.147103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:14 crc kubenswrapper[4808]: E0217 16:58:14.149225 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:16 crc kubenswrapper[4808]: E0217 16:58:16.148037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:19 crc kubenswrapper[4808]: I0217 16:58:19.146754 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:19 crc kubenswrapper[4808]: E0217 16:58:19.147461 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:27 crc kubenswrapper[4808]: E0217 16:58:27.158865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:28 crc kubenswrapper[4808]: E0217 16:58:28.155938 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:30 crc kubenswrapper[4808]: I0217 16:58:30.147908 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:30 crc kubenswrapper[4808]: E0217 16:58:30.148734 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:39 crc kubenswrapper[4808]: E0217 16:58:39.149132 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:39 crc kubenswrapper[4808]: E0217 16:58:39.149800 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:44 crc kubenswrapper[4808]: I0217 16:58:44.145186 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:44 crc kubenswrapper[4808]: E0217 16:58:44.145785 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.568517 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.569971 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-content" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570000 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-content" Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.570031 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570045 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.570091 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570103 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" Feb 17 16:58:49 crc kubenswrapper[4808]: E0217 16:58:49.570171 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-utilities" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570190 4808 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="extract-utilities" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570636 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5952700e-521a-4201-9352-33db5d11abf4" containerName="registry-server" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.570696 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="11efc7ce-322d-4bfe-95ad-c84d779a80d8" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.573465 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.583417 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.685382 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.685423 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.686035 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czltc\" (UniqueName: 
\"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.787612 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.787690 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.787715 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.788161 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.788176 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.809752 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"certified-operators-szhdh\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:49 crc kubenswrapper[4808]: I0217 16:58:49.903595 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:50 crc kubenswrapper[4808]: E0217 16:58:50.149755 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:58:50 crc kubenswrapper[4808]: E0217 16:58:50.149982 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:58:50 crc kubenswrapper[4808]: I0217 16:58:50.437683 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:58:51 crc kubenswrapper[4808]: I0217 16:58:51.436319 4808 generic.go:334] "Generic (PLEG): container finished" podID="740e9eba-2f31-48f8-af0e-68aec31e27cf" 
containerID="0f7be76c253b421188bbb3b738a02d69e75584ea443f6d666f3927a89f0359d4" exitCode=0 Feb 17 16:58:51 crc kubenswrapper[4808]: I0217 16:58:51.436657 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"0f7be76c253b421188bbb3b738a02d69e75584ea443f6d666f3927a89f0359d4"} Feb 17 16:58:51 crc kubenswrapper[4808]: I0217 16:58:51.436688 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerStarted","Data":"9e069878c6614ce22e9d278c679f49524cf425a1cd6c9df95a316782240123ee"} Feb 17 16:58:52 crc kubenswrapper[4808]: I0217 16:58:52.447384 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerStarted","Data":"5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2"} Feb 17 16:58:53 crc kubenswrapper[4808]: I0217 16:58:53.456003 4808 generic.go:334] "Generic (PLEG): container finished" podID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerID="5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2" exitCode=0 Feb 17 16:58:53 crc kubenswrapper[4808]: I0217 16:58:53.456086 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2"} Feb 17 16:58:54 crc kubenswrapper[4808]: I0217 16:58:54.470413 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerStarted","Data":"7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44"} Feb 17 16:58:54 crc 
kubenswrapper[4808]: I0217 16:58:54.500660 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-szhdh" podStartSLOduration=3.014932426 podStartE2EDuration="5.500643293s" podCreationTimestamp="2026-02-17 16:58:49 +0000 UTC" firstStartedPulling="2026-02-17 16:58:51.438606416 +0000 UTC m=+3894.954965489" lastFinishedPulling="2026-02-17 16:58:53.924317263 +0000 UTC m=+3897.440676356" observedRunningTime="2026-02-17 16:58:54.490697773 +0000 UTC m=+3898.007056856" watchObservedRunningTime="2026-02-17 16:58:54.500643293 +0000 UTC m=+3898.017002366" Feb 17 16:58:57 crc kubenswrapper[4808]: I0217 16:58:57.157681 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 16:58:57 crc kubenswrapper[4808]: E0217 16:58:57.158675 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 16:58:59 crc kubenswrapper[4808]: I0217 16:58:59.903858 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:59 crc kubenswrapper[4808]: I0217 16:58:59.904317 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:58:59 crc kubenswrapper[4808]: I0217 16:58:59.959836 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:00 crc kubenswrapper[4808]: I0217 16:59:00.585890 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:00 crc kubenswrapper[4808]: I0217 16:59:00.634627 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"] Feb 17 16:59:02 crc kubenswrapper[4808]: I0217 16:59:02.552069 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-szhdh" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server" containerID="cri-o://7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44" gracePeriod=2 Feb 17 16:59:03 crc kubenswrapper[4808]: I0217 16:59:03.567942 4808 generic.go:334] "Generic (PLEG): container finished" podID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerID="7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44" exitCode=0 Feb 17 16:59:03 crc kubenswrapper[4808]: I0217 16:59:03.568093 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44"} Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.130013 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh" Feb 17 16:59:04 crc kubenswrapper[4808]: E0217 16:59:04.147684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 16:59:04 crc kubenswrapper[4808]: E0217 16:59:04.148653 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.313710 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") pod \"740e9eba-2f31-48f8-af0e-68aec31e27cf\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.313862 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") pod \"740e9eba-2f31-48f8-af0e-68aec31e27cf\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.313983 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") pod \"740e9eba-2f31-48f8-af0e-68aec31e27cf\" (UID: \"740e9eba-2f31-48f8-af0e-68aec31e27cf\") " Feb 17 16:59:04 crc 
kubenswrapper[4808]: I0217 16:59:04.314801 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities" (OuterVolumeSpecName: "utilities") pod "740e9eba-2f31-48f8-af0e-68aec31e27cf" (UID: "740e9eba-2f31-48f8-af0e-68aec31e27cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.317290 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.327963 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc" (OuterVolumeSpecName: "kube-api-access-czltc") pod "740e9eba-2f31-48f8-af0e-68aec31e27cf" (UID: "740e9eba-2f31-48f8-af0e-68aec31e27cf"). InnerVolumeSpecName "kube-api-access-czltc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.408985 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "740e9eba-2f31-48f8-af0e-68aec31e27cf" (UID: "740e9eba-2f31-48f8-af0e-68aec31e27cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.419417 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czltc\" (UniqueName: \"kubernetes.io/projected/740e9eba-2f31-48f8-af0e-68aec31e27cf-kube-api-access-czltc\") on node \"crc\" DevicePath \"\""
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.419446 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/740e9eba-2f31-48f8-af0e-68aec31e27cf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.586680 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-szhdh" event={"ID":"740e9eba-2f31-48f8-af0e-68aec31e27cf","Type":"ContainerDied","Data":"9e069878c6614ce22e9d278c679f49524cf425a1cd6c9df95a316782240123ee"}
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.586784 4808 scope.go:117] "RemoveContainer" containerID="7365620845db54ba879f3622835dda751053aefedf606fd24aaeff794ccfed44"
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.586845 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-szhdh"
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.617494 4808 scope.go:117] "RemoveContainer" containerID="5e8850501eb79a3ea1c89c761415222512c2f195ce6edc451621d50b059d2db2"
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.652951 4808 scope.go:117] "RemoveContainer" containerID="0f7be76c253b421188bbb3b738a02d69e75584ea443f6d666f3927a89f0359d4"
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.656262 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"]
Feb 17 16:59:04 crc kubenswrapper[4808]: I0217 16:59:04.668802 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-szhdh"]
Feb 17 16:59:05 crc kubenswrapper[4808]: I0217 16:59:05.157921 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" path="/var/lib/kubelet/pods/740e9eba-2f31-48f8-af0e-68aec31e27cf/volumes"
Feb 17 16:59:12 crc kubenswrapper[4808]: I0217 16:59:12.145852 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"
Feb 17 16:59:12 crc kubenswrapper[4808]: E0217 16:59:12.146669 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:59:16 crc kubenswrapper[4808]: E0217 16:59:16.149849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:59:19 crc kubenswrapper[4808]: E0217 16:59:19.147995 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:59:24 crc kubenswrapper[4808]: I0217 16:59:24.146116 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"
Feb 17 16:59:24 crc kubenswrapper[4808]: E0217 16:59:24.147320 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:59:29 crc kubenswrapper[4808]: E0217 16:59:29.149535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:59:33 crc kubenswrapper[4808]: E0217 16:59:33.148842 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:59:37 crc kubenswrapper[4808]: I0217 16:59:37.151530 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"
Feb 17 16:59:37 crc kubenswrapper[4808]: E0217 16:59:37.152196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:59:44 crc kubenswrapper[4808]: E0217 16:59:44.148225 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 16:59:44 crc kubenswrapper[4808]: E0217 16:59:44.148643 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:59:49 crc kubenswrapper[4808]: I0217 16:59:49.147171 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"
Feb 17 16:59:49 crc kubenswrapper[4808]: E0217 16:59:49.148182 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 16:59:55 crc kubenswrapper[4808]: E0217 16:59:55.148065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 16:59:58 crc kubenswrapper[4808]: E0217 16:59:58.151218 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.177724 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"]
Feb 17 17:00:00 crc kubenswrapper[4808]: E0217 17:00:00.178692 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-content"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178707 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-content"
Feb 17 17:00:00 crc kubenswrapper[4808]: E0217 17:00:00.178749 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-utilities"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178757 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="extract-utilities"
Feb 17 17:00:00 crc kubenswrapper[4808]: E0217 17:00:00.178776 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178782 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.178993 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="740e9eba-2f31-48f8-af0e-68aec31e27cf" containerName="registry-server"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.179781 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.182181 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.186737 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.204204 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"]
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.232563 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.232706 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.232885 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.335274 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.335402 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.335534 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.336535 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.343266 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.353772 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"collect-profiles-29522460-lvm9k\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.502231 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:00 crc kubenswrapper[4808]: I0217 17:00:00.950057 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"]
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.028953 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"]
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.031075 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.033033 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.034113 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.034127 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.036439 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.037423 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"]
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.065265 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.065374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.065469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.168804 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.169874 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.170138 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.175453 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.175979 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.186683 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.195118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerStarted","Data":"33c65ad70d91085715bc675a67dc26448778e53315c13827ed28c79f1083adea"}
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.195162 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerStarted","Data":"4d072af7d7b41f63565bf3505064037fbb281aa8cd9e503fc5a958dbac22ec0e"}
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.211058 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" podStartSLOduration=1.2110377159999999 podStartE2EDuration="1.211037716s" podCreationTimestamp="2026-02-17 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:00:01.207266022 +0000 UTC m=+3964.723625115" watchObservedRunningTime="2026-02-17 17:00:01.211037716 +0000 UTC m=+3964.727396789"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.368998 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.914524 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:00:01 crc kubenswrapper[4808]: I0217 17:00:01.918973 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk"]
Feb 17 17:00:02 crc kubenswrapper[4808]: I0217 17:00:02.204691 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerStarted","Data":"7ccbd48b8c6ddd33e393b5cc60c189b1890685479c8bc28981b9cf1783cd1867"}
Feb 17 17:00:02 crc kubenswrapper[4808]: I0217 17:00:02.206605 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerDied","Data":"33c65ad70d91085715bc675a67dc26448778e53315c13827ed28c79f1083adea"}
Feb 17 17:00:02 crc kubenswrapper[4808]: I0217 17:00:02.206556 4808 generic.go:334] "Generic (PLEG): container finished" podID="7a359510-529f-4c70-8fee-5415433f1aff" containerID="33c65ad70d91085715bc675a67dc26448778e53315c13827ed28c79f1083adea" exitCode=0
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.145428 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"
Feb 17 17:00:03 crc kubenswrapper[4808]: E0217 17:00:03.146053 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.268297 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerStarted","Data":"6287c9af3f8fc5a9bacd7d967c6c0711a69d46294cccb346aa34f674145f916b"}
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.296360 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" podStartSLOduration=1.663979924 podStartE2EDuration="2.29633784s" podCreationTimestamp="2026-02-17 17:00:01 +0000 UTC" firstStartedPulling="2026-02-17 17:00:01.914310793 +0000 UTC m=+3965.430669866" lastFinishedPulling="2026-02-17 17:00:02.546668709 +0000 UTC m=+3966.063027782" observedRunningTime="2026-02-17 17:00:03.286700378 +0000 UTC m=+3966.803059461" watchObservedRunningTime="2026-02-17 17:00:03.29633784 +0000 UTC m=+3966.812696913"
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.743778 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.822013 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") pod \"7a359510-529f-4c70-8fee-5415433f1aff\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") "
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.822242 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") pod \"7a359510-529f-4c70-8fee-5415433f1aff\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") "
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.822410 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") pod \"7a359510-529f-4c70-8fee-5415433f1aff\" (UID: \"7a359510-529f-4c70-8fee-5415433f1aff\") "
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.823022 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a359510-529f-4c70-8fee-5415433f1aff" (UID: "7a359510-529f-4c70-8fee-5415433f1aff"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.829302 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9" (OuterVolumeSpecName: "kube-api-access-jlwr9") pod "7a359510-529f-4c70-8fee-5415433f1aff" (UID: "7a359510-529f-4c70-8fee-5415433f1aff"). InnerVolumeSpecName "kube-api-access-jlwr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.836809 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a359510-529f-4c70-8fee-5415433f1aff" (UID: "7a359510-529f-4c70-8fee-5415433f1aff"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.925831 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a359510-529f-4c70-8fee-5415433f1aff-config-volume\") on node \"crc\" DevicePath \"\""
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.925874 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a359510-529f-4c70-8fee-5415433f1aff-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 17 17:00:03 crc kubenswrapper[4808]: I0217 17:00:03.925889 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwr9\" (UniqueName: \"kubernetes.io/projected/7a359510-529f-4c70-8fee-5415433f1aff-kube-api-access-jlwr9\") on node \"crc\" DevicePath \"\""
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.279415 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.279399 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522460-lvm9k" event={"ID":"7a359510-529f-4c70-8fee-5415433f1aff","Type":"ContainerDied","Data":"4d072af7d7b41f63565bf3505064037fbb281aa8cd9e503fc5a958dbac22ec0e"}
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.279539 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d072af7d7b41f63565bf3505064037fbb281aa8cd9e503fc5a958dbac22ec0e"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.326043 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"]
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.336053 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522415-pp7nh"]
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.806463 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"]
Feb 17 17:00:04 crc kubenswrapper[4808]: E0217 17:00:04.807000 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a359510-529f-4c70-8fee-5415433f1aff" containerName="collect-profiles"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.807024 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a359510-529f-4c70-8fee-5415433f1aff" containerName="collect-profiles"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.807271 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a359510-529f-4c70-8fee-5415433f1aff" containerName="collect-profiles"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.808968 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.830895 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"]
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.861641 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.861819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.862129 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970196 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970306 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970383 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970719 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.970785 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:04 crc kubenswrapper[4808]: I0217 17:00:04.995939 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"redhat-marketplace-jrqlg\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:05 crc kubenswrapper[4808]: I0217 17:00:05.158964 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f86f53-7772-428e-b916-8624c83de123" path="/var/lib/kubelet/pods/41f86f53-7772-428e-b916-8624c83de123/volumes"
Feb 17 17:00:05 crc kubenswrapper[4808]: I0217 17:00:05.168305 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg"
Feb 17 17:00:05 crc kubenswrapper[4808]: I0217 17:00:05.624776 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"]
Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.270146 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.270473 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.270659 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:00:06 crc kubenswrapper[4808]: E0217 17:00:06.271857 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:06 crc kubenswrapper[4808]: I0217 17:00:06.298470 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" exitCode=0 Feb 17 17:00:06 crc kubenswrapper[4808]: I0217 17:00:06.298571 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262"} Feb 17 17:00:06 crc kubenswrapper[4808]: I0217 17:00:06.298941 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerStarted","Data":"ce300cc7efbbfaf7ea087a5e466967ad1bec84cde0e6b17839e7b09b820d7cd6"} Feb 17 17:00:07 crc kubenswrapper[4808]: I0217 17:00:07.312087 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerStarted","Data":"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21"} Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.328932 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" exitCode=0 Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.329056 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21"} Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.787436 4808 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.789827 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.815263 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.868621 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.868757 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.868813 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.972633 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"redhat-operators-7hsbw\" (UID: 
\"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.972742 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.972779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.973815 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.973859 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:08 crc kubenswrapper[4808]: I0217 17:00:08.992745 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"redhat-operators-7hsbw\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " 
pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.116837 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.349023 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerStarted","Data":"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433"} Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.376104 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrqlg" podStartSLOduration=2.722683836 podStartE2EDuration="5.376082628s" podCreationTimestamp="2026-02-17 17:00:04 +0000 UTC" firstStartedPulling="2026-02-17 17:00:06.300676557 +0000 UTC m=+3969.817035630" lastFinishedPulling="2026-02-17 17:00:08.954075349 +0000 UTC m=+3972.470434422" observedRunningTime="2026-02-17 17:00:09.36805007 +0000 UTC m=+3972.884409133" watchObservedRunningTime="2026-02-17 17:00:09.376082628 +0000 UTC m=+3972.892441701" Feb 17 17:00:09 crc kubenswrapper[4808]: I0217 17:00:09.664223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:10 crc kubenswrapper[4808]: I0217 17:00:10.358230 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerID="96f1f271d2bd07ead3d1f83bebbdbbb97452db459ce59a3b4676fb385cc8c17e" exitCode=0 Feb 17 17:00:10 crc kubenswrapper[4808]: I0217 17:00:10.358300 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"96f1f271d2bd07ead3d1f83bebbdbbb97452db459ce59a3b4676fb385cc8c17e"} Feb 17 17:00:10 crc kubenswrapper[4808]: I0217 
17:00:10.359771 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerStarted","Data":"318fa89bc3edd094bbe66b4e0345273e686b3e18d39970e04a57723871357c51"} Feb 17 17:00:11 crc kubenswrapper[4808]: I0217 17:00:11.369822 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerStarted","Data":"36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea"} Feb 17 17:00:12 crc kubenswrapper[4808]: E0217 17:00:12.148028 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:14 crc kubenswrapper[4808]: I0217 17:00:14.401522 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerID="36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea" exitCode=0 Feb 17 17:00:14 crc kubenswrapper[4808]: I0217 17:00:14.401611 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea"} Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.169024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.169257 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: 
I0217 17:00:15.216503 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.414082 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerStarted","Data":"1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9"} Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.430639 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7hsbw" podStartSLOduration=2.979943617 podStartE2EDuration="7.430622232s" podCreationTimestamp="2026-02-17 17:00:08 +0000 UTC" firstStartedPulling="2026-02-17 17:00:10.360294485 +0000 UTC m=+3973.876653548" lastFinishedPulling="2026-02-17 17:00:14.81097309 +0000 UTC m=+3978.327332163" observedRunningTime="2026-02-17 17:00:15.429038348 +0000 UTC m=+3978.945397431" watchObservedRunningTime="2026-02-17 17:00:15.430622232 +0000 UTC m=+3978.946981305" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.484321 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:15 crc kubenswrapper[4808]: I0217 17:00:15.979305 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:17 crc kubenswrapper[4808]: I0217 17:00:17.152039 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 17:00:17 crc kubenswrapper[4808]: E0217 17:00:17.152699 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:00:17 crc kubenswrapper[4808]: I0217 17:00:17.432700 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrqlg" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" containerID="cri-o://1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" gracePeriod=2 Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.061259 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.147484 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.175558 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") pod \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.176907 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") pod \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.177256 4808 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") pod \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\" (UID: \"3e83d8af-25d4-4332-921b-7f4e8b4373c6\") " Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.191337 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities" (OuterVolumeSpecName: "utilities") pod "3e83d8af-25d4-4332-921b-7f4e8b4373c6" (UID: "3e83d8af-25d4-4332-921b-7f4e8b4373c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.212605 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e83d8af-25d4-4332-921b-7f4e8b4373c6" (UID: "3e83d8af-25d4-4332-921b-7f4e8b4373c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.251515 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb" (OuterVolumeSpecName: "kube-api-access-dxlpb") pod "3e83d8af-25d4-4332-921b-7f4e8b4373c6" (UID: "3e83d8af-25d4-4332-921b-7f4e8b4373c6"). InnerVolumeSpecName "kube-api-access-dxlpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.280195 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.280234 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxlpb\" (UniqueName: \"kubernetes.io/projected/3e83d8af-25d4-4332-921b-7f4e8b4373c6-kube-api-access-dxlpb\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.280247 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e83d8af-25d4-4332-921b-7f4e8b4373c6-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443235 4808 generic.go:334] "Generic (PLEG): container finished" podID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" exitCode=0 Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443283 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433"} Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443316 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrqlg" event={"ID":"3e83d8af-25d4-4332-921b-7f4e8b4373c6","Type":"ContainerDied","Data":"ce300cc7efbbfaf7ea087a5e466967ad1bec84cde0e6b17839e7b09b820d7cd6"} Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.443333 4808 scope.go:117] "RemoveContainer" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 
17:00:18.443451 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrqlg" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.468019 4808 scope.go:117] "RemoveContainer" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.486376 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.496606 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrqlg"] Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.509674 4808 scope.go:117] "RemoveContainer" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.556359 4808 scope.go:117] "RemoveContainer" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.556961 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433\": container with ID starting with 1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433 not found: ID does not exist" containerID="1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.557005 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433"} err="failed to get container status \"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433\": rpc error: code = NotFound desc = could not find container \"1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433\": container with ID starting with 
1cf73a78abc574fcd9ab5d34937fd405d8ea74de7b2c04d9595ec6692931b433 not found: ID does not exist" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.557037 4808 scope.go:117] "RemoveContainer" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.561756 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21\": container with ID starting with 76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21 not found: ID does not exist" containerID="76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.561825 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21"} err="failed to get container status \"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21\": rpc error: code = NotFound desc = could not find container \"76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21\": container with ID starting with 76f01c5c36a5224959dfdedf23a07830accee10e090dfb6a907075bc920bbd21 not found: ID does not exist" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.561877 4808 scope.go:117] "RemoveContainer" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" Feb 17 17:00:18 crc kubenswrapper[4808]: E0217 17:00:18.562334 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262\": container with ID starting with 3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262 not found: ID does not exist" containerID="3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262" Feb 17 17:00:18 crc 
kubenswrapper[4808]: I0217 17:00:18.562383 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262"} err="failed to get container status \"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262\": rpc error: code = NotFound desc = could not find container \"3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262\": container with ID starting with 3799d28c7a608a801a3f204853db0abffaef6f609e58ac97e901828d128b6262 not found: ID does not exist" Feb 17 17:00:18 crc kubenswrapper[4808]: I0217 17:00:18.646763 4808 scope.go:117] "RemoveContainer" containerID="af2c8b60da9d5276edbe2e0351b8e1093617fb76e21f063ad9744c8103bb6313" Feb 17 17:00:19 crc kubenswrapper[4808]: I0217 17:00:19.118265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:19 crc kubenswrapper[4808]: I0217 17:00:19.118755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:19 crc kubenswrapper[4808]: I0217 17:00:19.165076 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" path="/var/lib/kubelet/pods/3e83d8af-25d4-4332-921b-7f4e8b4373c6/volumes" Feb 17 17:00:20 crc kubenswrapper[4808]: I0217 17:00:20.614520 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7hsbw" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" probeResult="failure" output=< Feb 17 17:00:20 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:00:20 crc kubenswrapper[4808]: > Feb 17 17:00:25 crc kubenswrapper[4808]: E0217 17:00:25.148743 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:28 crc kubenswrapper[4808]: I0217 17:00:28.146400 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5" Feb 17 17:00:28 crc kubenswrapper[4808]: I0217 17:00:28.540308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"} Feb 17 17:00:29 crc kubenswrapper[4808]: E0217 17:00:29.179820 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:29 crc kubenswrapper[4808]: I0217 17:00:29.206298 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:29 crc kubenswrapper[4808]: I0217 17:00:29.265123 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:29 crc kubenswrapper[4808]: I0217 17:00:29.453825 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:30 crc kubenswrapper[4808]: I0217 17:00:30.561477 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7hsbw" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" 
containerID="cri-o://1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9" gracePeriod=2 Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.581340 4808 generic.go:334] "Generic (PLEG): container finished" podID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerID="1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9" exitCode=0 Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.581461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9"} Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.906448 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.996471 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") pod \"0b80afd2-f4bc-40fe-9082-9f8db573476c\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.996613 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") pod \"0b80afd2-f4bc-40fe-9082-9f8db573476c\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 17:00:31.996769 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") pod \"0b80afd2-f4bc-40fe-9082-9f8db573476c\" (UID: \"0b80afd2-f4bc-40fe-9082-9f8db573476c\") " Feb 17 17:00:31 crc kubenswrapper[4808]: I0217 
17:00:31.997408 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities" (OuterVolumeSpecName: "utilities") pod "0b80afd2-f4bc-40fe-9082-9f8db573476c" (UID: "0b80afd2-f4bc-40fe-9082-9f8db573476c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.012533 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7" (OuterVolumeSpecName: "kube-api-access-8t7k7") pod "0b80afd2-f4bc-40fe-9082-9f8db573476c" (UID: "0b80afd2-f4bc-40fe-9082-9f8db573476c"). InnerVolumeSpecName "kube-api-access-8t7k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.098462 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t7k7\" (UniqueName: \"kubernetes.io/projected/0b80afd2-f4bc-40fe-9082-9f8db573476c-kube-api-access-8t7k7\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.098495 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.136853 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b80afd2-f4bc-40fe-9082-9f8db573476c" (UID: "0b80afd2-f4bc-40fe-9082-9f8db573476c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.199662 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b80afd2-f4bc-40fe-9082-9f8db573476c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.592372 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7hsbw" event={"ID":"0b80afd2-f4bc-40fe-9082-9f8db573476c","Type":"ContainerDied","Data":"318fa89bc3edd094bbe66b4e0345273e686b3e18d39970e04a57723871357c51"} Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.592447 4808 scope.go:117] "RemoveContainer" containerID="1708d7b0d3eb7e0941e2a134e49dd13a3649ddc50b2e62db6277f45786ecf0a9" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.592454 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7hsbw" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.626053 4808 scope.go:117] "RemoveContainer" containerID="36ad0d790006e5a1ec22dff95061c4149c581b4e4339f62b424674bff8ee3dea" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.662742 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.663648 4808 scope.go:117] "RemoveContainer" containerID="96f1f271d2bd07ead3d1f83bebbdbbb97452db459ce59a3b4676fb385cc8c17e" Feb 17 17:00:32 crc kubenswrapper[4808]: I0217 17:00:32.670711 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7hsbw"] Feb 17 17:00:33 crc kubenswrapper[4808]: I0217 17:00:33.159860 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" path="/var/lib/kubelet/pods/0b80afd2-f4bc-40fe-9082-9f8db573476c/volumes" Feb 17 17:00:37 crc 
kubenswrapper[4808]: E0217 17:00:37.292609 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.293387 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.293638 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:00:37 crc kubenswrapper[4808]: E0217 17:00:37.294956 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:44 crc kubenswrapper[4808]: E0217 17:00:44.148722 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:00:45 crc kubenswrapper[4808]: I0217 17:00:45.776804 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="ade81c90-5cdf-45d4-ad2f-52a3514e1596" containerName="galera" probeResult="failure" output="command timed out" Feb 17 17:00:48 crc kubenswrapper[4808]: E0217 17:00:48.148138 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:00:57 crc kubenswrapper[4808]: E0217 17:00:57.157942 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.165051 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29522461-f5wx2"] Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166260 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-utilities" Feb 17 
17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166289 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166310 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166321 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166348 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166361 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-content" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166396 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166409 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="extract-utilities" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166438 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166449 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: E0217 17:01:00.166466 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" Feb 17 17:01:00 
crc kubenswrapper[4808]: I0217 17:01:00.166477 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166800 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e83d8af-25d4-4332-921b-7f4e8b4373c6" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.166847 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b80afd2-f4bc-40fe-9082-9f8db573476c" containerName="registry-server" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.168017 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.182006 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-f5wx2"] Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360465 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360548 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360894 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvcvk\" (UniqueName: 
\"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.360995 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463232 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463408 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.463549 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvcvk\" (UniqueName: 
\"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.470110 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.471422 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.472205 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.495181 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"keystone-cron-29522461-f5wx2\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:00 crc kubenswrapper[4808]: I0217 17:01:00.510057 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.040937 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29522461-f5wx2"] Feb 17 17:01:01 crc kubenswrapper[4808]: W0217 17:01:01.044531 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd443f775_9b53_4aaf_bcda_68aed8d88e84.slice/crio-2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882 WatchSource:0}: Error finding container 2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882: Status 404 returned error can't find the container with id 2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882 Feb 17 17:01:01 crc kubenswrapper[4808]: E0217 17:01:01.167051 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.901651 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerStarted","Data":"006837e83c0d08aa480ea6f3d7c1d67333a0c2ed67bca87f005ebff08eb39d6a"} Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.901702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerStarted","Data":"2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882"} Feb 17 17:01:01 crc kubenswrapper[4808]: I0217 17:01:01.925301 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29522461-f5wx2" 
podStartSLOduration=1.92528509 podStartE2EDuration="1.92528509s" podCreationTimestamp="2026-02-17 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-17 17:01:01.915034241 +0000 UTC m=+4025.431393314" watchObservedRunningTime="2026-02-17 17:01:01.92528509 +0000 UTC m=+4025.441644153" Feb 17 17:01:03 crc kubenswrapper[4808]: I0217 17:01:03.921476 4808 generic.go:334] "Generic (PLEG): container finished" podID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerID="006837e83c0d08aa480ea6f3d7c1d67333a0c2ed67bca87f005ebff08eb39d6a" exitCode=0 Feb 17 17:01:03 crc kubenswrapper[4808]: I0217 17:01:03.921602 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerDied","Data":"006837e83c0d08aa480ea6f3d7c1d67333a0c2ed67bca87f005ebff08eb39d6a"} Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.357906 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.470338 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.470765 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.470955 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.471144 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") pod \"d443f775-9b53-4aaf-bcda-68aed8d88e84\" (UID: \"d443f775-9b53-4aaf-bcda-68aed8d88e84\") " Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.476744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk" (OuterVolumeSpecName: "kube-api-access-jvcvk") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "kube-api-access-jvcvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.479240 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.498689 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.519797 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data" (OuterVolumeSpecName: "config-data") pod "d443f775-9b53-4aaf-bcda-68aed8d88e84" (UID: "d443f775-9b53-4aaf-bcda-68aed8d88e84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574722 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvcvk\" (UniqueName: \"kubernetes.io/projected/d443f775-9b53-4aaf-bcda-68aed8d88e84-kube-api-access-jvcvk\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574768 4808 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-config-data\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574783 4808 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.574793 4808 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d443f775-9b53-4aaf-bcda-68aed8d88e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.938361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29522461-f5wx2" event={"ID":"d443f775-9b53-4aaf-bcda-68aed8d88e84","Type":"ContainerDied","Data":"2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882"} Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.938715 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2998cde3e89c7b720e4f65d35b80963dde294a36d0acbf064b36c6b3f7621882" Feb 17 17:01:05 crc kubenswrapper[4808]: I0217 17:01:05.938436 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29522461-f5wx2" Feb 17 17:01:12 crc kubenswrapper[4808]: E0217 17:01:12.147800 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:13 crc kubenswrapper[4808]: E0217 17:01:13.147000 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:24 crc kubenswrapper[4808]: E0217 17:01:24.147406 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:25 crc kubenswrapper[4808]: E0217 17:01:25.147899 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:37 crc kubenswrapper[4808]: E0217 17:01:37.156144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:01:38 crc kubenswrapper[4808]: E0217 17:01:38.148698 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:51 crc kubenswrapper[4808]: E0217 17:01:51.148698 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:01:52 crc kubenswrapper[4808]: E0217 17:01:52.149479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:02:00 crc kubenswrapper[4808]: E0217 17:02:00.184069 4808 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s" Feb 17 17:02:03 crc kubenswrapper[4808]: E0217 17:02:03.149045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:02:03 crc kubenswrapper[4808]: E0217 17:02:03.149045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:02:16 crc kubenswrapper[4808]: E0217 17:02:16.149441 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:02:17 crc kubenswrapper[4808]: E0217 17:02:17.159728 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:02:29 crc kubenswrapper[4808]: E0217 17:02:29.149065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:02:30 crc kubenswrapper[4808]: E0217 17:02:30.147649 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:02:43 crc kubenswrapper[4808]: E0217 17:02:43.148689 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:02:44 crc kubenswrapper[4808]: E0217 17:02:44.147421 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:02:51 crc kubenswrapper[4808]: I0217 17:02:51.591987 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:02:51 crc kubenswrapper[4808]: I0217 17:02:51.593534 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:02:55 crc kubenswrapper[4808]: E0217 17:02:55.148508 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:02:58 crc kubenswrapper[4808]: E0217 17:02:58.149851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:10 crc kubenswrapper[4808]: E0217 17:03:10.148954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.808217 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:10 crc kubenswrapper[4808]: E0217 17:03:10.809008 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerName="keystone-cron"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.809038 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerName="keystone-cron"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.809366 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d443f775-9b53-4aaf-bcda-68aed8d88e84" containerName="keystone-cron"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.812040 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.848719 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.850592 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.850739 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.850795 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.953353 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.953452 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.953783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.954508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.954774 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:10 crc kubenswrapper[4808]: I0217 17:03:10.989130 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"community-operators-sx45k\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") " pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:11 crc kubenswrapper[4808]: I0217 17:03:11.144485 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:11 crc kubenswrapper[4808]: E0217 17:03:11.156818 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.280209 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.987815 4808 generic.go:334] "Generic (PLEG): container finished" podID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f" exitCode=0
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.987904 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"}
Feb 17 17:03:12 crc kubenswrapper[4808]: I0217 17:03:12.988141 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerStarted","Data":"59455c074c8369d9c1bdabb7113ee733d5f53d53a6ad636a052a2f8f11ed7c86"}
Feb 17 17:03:14 crc kubenswrapper[4808]: I0217 17:03:14.001315 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerStarted","Data":"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"}
Feb 17 17:03:15 crc kubenswrapper[4808]: I0217 17:03:15.012465 4808 generic.go:334] "Generic (PLEG): container finished" podID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09" exitCode=0
Feb 17 17:03:15 crc kubenswrapper[4808]: I0217 17:03:15.012510 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"}
Feb 17 17:03:16 crc kubenswrapper[4808]: I0217 17:03:16.023737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerStarted","Data":"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"}
Feb 17 17:03:16 crc kubenswrapper[4808]: I0217 17:03:16.040295 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sx45k" podStartSLOduration=3.3967107690000002 podStartE2EDuration="6.040276864s" podCreationTimestamp="2026-02-17 17:03:10 +0000 UTC" firstStartedPulling="2026-02-17 17:03:12.990090933 +0000 UTC m=+4156.506450006" lastFinishedPulling="2026-02-17 17:03:15.633657018 +0000 UTC m=+4159.150016101" observedRunningTime="2026-02-17 17:03:16.038457474 +0000 UTC m=+4159.554816577" watchObservedRunningTime="2026-02-17 17:03:16.040276864 +0000 UTC m=+4159.556635947"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.145738 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.146299 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.193244 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.592796 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:03:21 crc kubenswrapper[4808]: I0217 17:03:21.592885 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:03:22 crc kubenswrapper[4808]: I0217 17:03:22.118117 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:22 crc kubenswrapper[4808]: E0217 17:03:22.147991 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:22 crc kubenswrapper[4808]: I0217 17:03:22.174082 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.106416 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sx45k" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" containerID="cri-o://8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc" gracePeriod=2
Feb 17 17:03:24 crc kubenswrapper[4808]: E0217 17:03:24.146813 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.746164 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.852356 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") pod \"f017987c-650c-47b4-a33f-3ab1dfb8c281\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") "
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.852460 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") pod \"f017987c-650c-47b4-a33f-3ab1dfb8c281\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") "
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.852657 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") pod \"f017987c-650c-47b4-a33f-3ab1dfb8c281\" (UID: \"f017987c-650c-47b4-a33f-3ab1dfb8c281\") "
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.854051 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities" (OuterVolumeSpecName: "utilities") pod "f017987c-650c-47b4-a33f-3ab1dfb8c281" (UID: "f017987c-650c-47b4-a33f-3ab1dfb8c281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.857866 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw" (OuterVolumeSpecName: "kube-api-access-gx6bw") pod "f017987c-650c-47b4-a33f-3ab1dfb8c281" (UID: "f017987c-650c-47b4-a33f-3ab1dfb8c281"). InnerVolumeSpecName "kube-api-access-gx6bw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.955126 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx6bw\" (UniqueName: \"kubernetes.io/projected/f017987c-650c-47b4-a33f-3ab1dfb8c281-kube-api-access-gx6bw\") on node \"crc\" DevicePath \"\""
Feb 17 17:03:24 crc kubenswrapper[4808]: I0217 17:03:24.955159 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118286 4808 generic.go:334] "Generic (PLEG): container finished" podID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc" exitCode=0
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118347 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"}
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118369 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sx45k"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118398 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sx45k" event={"ID":"f017987c-650c-47b4-a33f-3ab1dfb8c281","Type":"ContainerDied","Data":"59455c074c8369d9c1bdabb7113ee733d5f53d53a6ad636a052a2f8f11ed7c86"}
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.118417 4808 scope.go:117] "RemoveContainer" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.145055 4808 scope.go:117] "RemoveContainer" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.168267 4808 scope.go:117] "RemoveContainer" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.174752 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f017987c-650c-47b4-a33f-3ab1dfb8c281" (UID: "f017987c-650c-47b4-a33f-3ab1dfb8c281"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.251165 4808 scope.go:117] "RemoveContainer" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"
Feb 17 17:03:25 crc kubenswrapper[4808]: E0217 17:03:25.252238 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc\": container with ID starting with 8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc not found: ID does not exist" containerID="8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252278 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc"} err="failed to get container status \"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc\": rpc error: code = NotFound desc = could not find container \"8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc\": container with ID starting with 8894e31e04c0172f7d7f363415fe9ef78ac9e3fef99150ff177cb908671993cc not found: ID does not exist"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252326 4808 scope.go:117] "RemoveContainer" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"
Feb 17 17:03:25 crc kubenswrapper[4808]: E0217 17:03:25.252876 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09\": container with ID starting with 7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09 not found: ID does not exist" containerID="7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252931 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09"} err="failed to get container status \"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09\": rpc error: code = NotFound desc = could not find container \"7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09\": container with ID starting with 7f75761c4ebd95e9d96977aa4e7c82db76794278f1710e9142cc48d27aa32c09 not found: ID does not exist"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.252968 4808 scope.go:117] "RemoveContainer" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"
Feb 17 17:03:25 crc kubenswrapper[4808]: E0217 17:03:25.253354 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f\": container with ID starting with f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f not found: ID does not exist" containerID="f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.253390 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f"} err="failed to get container status \"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f\": rpc error: code = NotFound desc = could not find container \"f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f\": container with ID starting with f57848df42df8a0a7bedb5e002dc8de9f940f80a89cff87d4a3a68a99da5540f not found: ID does not exist"
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.261994 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f017987c-650c-47b4-a33f-3ab1dfb8c281-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.458366 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:25 crc kubenswrapper[4808]: I0217 17:03:25.473782 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sx45k"]
Feb 17 17:03:27 crc kubenswrapper[4808]: I0217 17:03:27.160355 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" path="/var/lib/kubelet/pods/f017987c-650c-47b4-a33f-3ab1dfb8c281/volumes"
Feb 17 17:03:33 crc kubenswrapper[4808]: E0217 17:03:33.147595 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:37 crc kubenswrapper[4808]: E0217 17:03:37.168725 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:44 crc kubenswrapper[4808]: E0217 17:03:44.148502 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:03:51 crc kubenswrapper[4808]: E0217 17:03:51.148114 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.592621 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.592971 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.593021 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.593820 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:03:51 crc kubenswrapper[4808]: I0217 17:03:51.593883 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a" gracePeriod=600
Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461202 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a" exitCode=0
Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461293 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a"}
Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461559 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"}
Feb 17 17:03:52 crc kubenswrapper[4808]: I0217 17:03:52.461644 4808 scope.go:117] "RemoveContainer" containerID="7fbe8df1c68f978d3698bd74ae49612c95a40d103c6fa3bdaa17006e991ad2e5"
Feb 17 17:03:57 crc kubenswrapper[4808]: E0217 17:03:57.162256 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:04:04 crc kubenswrapper[4808]: E0217 17:04:04.148288 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:04:10 crc kubenswrapper[4808]: E0217 17:04:10.148974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:04:17 crc kubenswrapper[4808]: E0217 17:04:17.155460 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:04:25 crc kubenswrapper[4808]: E0217 17:04:25.149363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:04:30 crc kubenswrapper[4808]: E0217 17:04:30.149192 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:04:36 crc kubenswrapper[4808]: E0217 17:04:36.147708 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:04:42 crc kubenswrapper[4808]: E0217 17:04:42.148494 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:04:50 crc kubenswrapper[4808]: E0217 17:04:50.149835 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:04:57 crc kubenswrapper[4808]: E0217 17:04:57.164547 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:05:04 crc kubenswrapper[4808]: E0217 17:05:04.148649 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:05:10 crc kubenswrapper[4808]: I0217 17:05:10.162422 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.269165 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.269224 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.269366 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:05:10 crc kubenswrapper[4808]: E0217 17:05:10.270684 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:05:17 crc kubenswrapper[4808]: E0217 17:05:17.166045 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:05:24 crc kubenswrapper[4808]: E0217 17:05:24.150001 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:05:30 crc kubenswrapper[4808]: E0217 17:05:30.147642 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:05:39 crc kubenswrapper[4808]: E0217 17:05:39.148022 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.279537 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.280207 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.280361 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:05:43 crc kubenswrapper[4808]: E0217 17:05:43.281623 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:05:53 crc kubenswrapper[4808]: E0217 17:05:53.147427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:05:58 crc kubenswrapper[4808]: E0217 17:05:58.149511 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:06:08 crc kubenswrapper[4808]: E0217 17:06:08.148726 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:06:13 crc kubenswrapper[4808]: E0217 17:06:13.156118 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:06:17 crc kubenswrapper[4808]: I0217 17:06:17.938106 4808 generic.go:334] "Generic (PLEG): container finished" podID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerID="6287c9af3f8fc5a9bacd7d967c6c0711a69d46294cccb346aa34f674145f916b" 
exitCode=2 Feb 17 17:06:17 crc kubenswrapper[4808]: I0217 17:06:17.938151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerDied","Data":"6287c9af3f8fc5a9bacd7d967c6c0711a69d46294cccb346aa34f674145f916b"} Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.878147 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.962038 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" event={"ID":"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd","Type":"ContainerDied","Data":"7ccbd48b8c6ddd33e393b5cc60c189b1890685479c8bc28981b9cf1783cd1867"} Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.962114 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ccbd48b8c6ddd33e393b5cc60c189b1890685479c8bc28981b9cf1783cd1867" Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.962118 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk" Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.993281 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") pod \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.993419 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") pod \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " Feb 17 17:06:19 crc kubenswrapper[4808]: I0217 17:06:19.993496 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") pod \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\" (UID: \"6fa90ca1-9ae4-4cce-a41f-640f2629ccfd\") " Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.000185 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj" (OuterVolumeSpecName: "kube-api-access-94ggj") pod "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" (UID: "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd"). InnerVolumeSpecName "kube-api-access-94ggj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.036841 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory" (OuterVolumeSpecName: "inventory") pod "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" (UID: "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.053630 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" (UID: "6fa90ca1-9ae4-4cce-a41f-640f2629ccfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.096388 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94ggj\" (UniqueName: \"kubernetes.io/projected/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-kube-api-access-94ggj\") on node \"crc\" DevicePath \"\"" Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.096422 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 17 17:06:20 crc kubenswrapper[4808]: I0217 17:06:20.096437 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fa90ca1-9ae4-4cce-a41f-640f2629ccfd-inventory\") on node \"crc\" DevicePath \"\"" Feb 17 17:06:21 crc kubenswrapper[4808]: E0217 17:06:21.148143 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:06:21 crc kubenswrapper[4808]: I0217 17:06:21.592599 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:06:21 crc kubenswrapper[4808]: I0217 17:06:21.592685 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:06:24 crc kubenswrapper[4808]: E0217 17:06:24.151185 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:06:33 crc kubenswrapper[4808]: E0217 17:06:33.148014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:06:36 crc kubenswrapper[4808]: E0217 17:06:36.148860 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:06:44 crc kubenswrapper[4808]: E0217 17:06:44.148593 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:06:49 crc kubenswrapper[4808]: E0217 17:06:49.148736 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:06:51 crc kubenswrapper[4808]: I0217 17:06:51.592664 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:06:51 crc kubenswrapper[4808]: I0217 17:06:51.593074 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:06:57 crc kubenswrapper[4808]: E0217 17:06:57.155340 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:07:01 crc kubenswrapper[4808]: E0217 17:07:01.148106 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:12 crc kubenswrapper[4808]: E0217 17:07:12.148429 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:07:12 crc kubenswrapper[4808]: E0217 17:07:12.148564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.591922 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.592431 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.592471 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 
17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.593033 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:07:21 crc kubenswrapper[4808]: I0217 17:07:21.593093 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" gracePeriod=600 Feb 17 17:07:21 crc kubenswrapper[4808]: E0217 17:07:21.721541 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.557530 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" exitCode=0 Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.557595 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"} Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.557876 4808 scope.go:117] 
"RemoveContainer" containerID="1c02b3c7aae9a1c0d42f9daaaf983a7832eab0de1b546cc54ac3397eb20c3c2a" Feb 17 17:07:22 crc kubenswrapper[4808]: I0217 17:07:22.558729 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:07:22 crc kubenswrapper[4808]: E0217 17:07:22.559122 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:07:24 crc kubenswrapper[4808]: E0217 17:07:24.147746 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:26 crc kubenswrapper[4808]: E0217 17:07:26.147834 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:07:34 crc kubenswrapper[4808]: I0217 17:07:34.146705 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:07:34 crc kubenswrapper[4808]: E0217 17:07:34.147522 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:07:38 crc kubenswrapper[4808]: E0217 17:07:38.149010 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:39 crc kubenswrapper[4808]: E0217 17:07:39.147286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:07:48 crc kubenswrapper[4808]: I0217 17:07:48.146281 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:07:48 crc kubenswrapper[4808]: E0217 17:07:48.147307 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:07:51 crc kubenswrapper[4808]: E0217 17:07:51.148883 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:07:54 crc kubenswrapper[4808]: E0217 17:07:54.148611 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:02 crc kubenswrapper[4808]: I0217 17:08:02.146912 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:02 crc kubenswrapper[4808]: E0217 17:08:02.148049 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:03 crc kubenswrapper[4808]: E0217 17:08:03.149718 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:05 crc kubenswrapper[4808]: E0217 17:08:05.149082 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:15 crc kubenswrapper[4808]: E0217 17:08:15.148817 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:16 crc kubenswrapper[4808]: I0217 17:08:16.145396 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:16 crc kubenswrapper[4808]: E0217 17:08:16.145954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:19 crc kubenswrapper[4808]: E0217 17:08:19.148714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:26 crc kubenswrapper[4808]: E0217 17:08:26.149721 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 
17:08:28 crc kubenswrapper[4808]: I0217 17:08:28.145755 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:28 crc kubenswrapper[4808]: E0217 17:08:28.147172 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:30 crc kubenswrapper[4808]: E0217 17:08:30.147938 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:40 crc kubenswrapper[4808]: E0217 17:08:40.147564 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:42 crc kubenswrapper[4808]: I0217 17:08:42.145922 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:42 crc kubenswrapper[4808]: E0217 17:08:42.146648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:45 crc kubenswrapper[4808]: E0217 17:08:45.150931 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:08:52 crc kubenswrapper[4808]: E0217 17:08:52.148139 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:08:53 crc kubenswrapper[4808]: I0217 17:08:53.146900 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:08:53 crc kubenswrapper[4808]: E0217 17:08:53.147539 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:08:59 crc kubenswrapper[4808]: E0217 17:08:59.149396 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:05 crc kubenswrapper[4808]: E0217 17:09:05.148722 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:08 crc kubenswrapper[4808]: I0217 17:09:08.147071 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:08 crc kubenswrapper[4808]: E0217 17:09:08.148210 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:10 crc kubenswrapper[4808]: E0217 17:09:10.148835 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:18 crc kubenswrapper[4808]: E0217 17:09:18.147951 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:20 crc kubenswrapper[4808]: I0217 17:09:20.147175 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:20 crc kubenswrapper[4808]: E0217 17:09:20.148090 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:22 crc kubenswrapper[4808]: E0217 17:09:22.149384 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.633315 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634138 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-content" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634609 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-content" Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634629 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" 
containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634640 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634659 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634669 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" Feb 17 17:09:24 crc kubenswrapper[4808]: E0217 17:09:24.634703 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-utilities" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634713 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="extract-utilities" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.634975 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="f017987c-650c-47b4-a33f-3ab1dfb8c281" containerName="registry-server" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.635004 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa90ca1-9ae4-4cce-a41f-640f2629ccfd" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.636924 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.649178 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.752444 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.752604 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.752654 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.854736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.854896 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.854992 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.855479 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.855631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.874681 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"certified-operators-cmmcg\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:24 crc kubenswrapper[4808]: I0217 17:09:24.957817 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.460215 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.802004 4808 generic.go:334] "Generic (PLEG): container finished" podID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006" exitCode=0 Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.802199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006"} Feb 17 17:09:25 crc kubenswrapper[4808]: I0217 17:09:25.802358 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerStarted","Data":"ccb25eff53406cd3afddc36977de47347d5a7b3c88fba33ec6d8a31f7f5dacae"} Feb 17 17:09:26 crc kubenswrapper[4808]: I0217 17:09:26.812118 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerStarted","Data":"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"} Feb 17 17:09:27 crc kubenswrapper[4808]: I0217 17:09:27.825997 4808 generic.go:334] "Generic (PLEG): container finished" podID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd" exitCode=0 Feb 17 17:09:27 crc kubenswrapper[4808]: I0217 17:09:27.826124 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" 
event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"} Feb 17 17:09:28 crc kubenswrapper[4808]: I0217 17:09:28.840492 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerStarted","Data":"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee"} Feb 17 17:09:28 crc kubenswrapper[4808]: I0217 17:09:28.861789 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmmcg" podStartSLOduration=2.433383673 podStartE2EDuration="4.8617717s" podCreationTimestamp="2026-02-17 17:09:24 +0000 UTC" firstStartedPulling="2026-02-17 17:09:25.81439615 +0000 UTC m=+4529.330755223" lastFinishedPulling="2026-02-17 17:09:28.242784177 +0000 UTC m=+4531.759143250" observedRunningTime="2026-02-17 17:09:28.854491893 +0000 UTC m=+4532.370850976" watchObservedRunningTime="2026-02-17 17:09:28.8617717 +0000 UTC m=+4532.378130773" Feb 17 17:09:32 crc kubenswrapper[4808]: E0217 17:09:32.148773 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:33 crc kubenswrapper[4808]: I0217 17:09:33.146346 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:33 crc kubenswrapper[4808]: E0217 17:09:33.146669 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:33 crc kubenswrapper[4808]: E0217 17:09:33.148196 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:09:34 crc kubenswrapper[4808]: I0217 17:09:34.958619 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:34 crc kubenswrapper[4808]: I0217 17:09:34.958970 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:35 crc kubenswrapper[4808]: I0217 17:09:35.003858 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:35 crc kubenswrapper[4808]: I0217 17:09:35.986289 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:36 crc kubenswrapper[4808]: I0217 17:09:36.039403 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:37 crc kubenswrapper[4808]: I0217 17:09:37.953370 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmmcg" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server" containerID="cri-o://1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" gracePeriod=2 Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.621898 
4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.642306 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") pod \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.642548 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") pod \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.642719 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") pod \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\" (UID: \"550853e0-a7b5-406d-bb66-8d36cb6f5f68\") " Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.643993 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities" (OuterVolumeSpecName: "utilities") pod "550853e0-a7b5-406d-bb66-8d36cb6f5f68" (UID: "550853e0-a7b5-406d-bb66-8d36cb6f5f68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.656915 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z" (OuterVolumeSpecName: "kube-api-access-4l87z") pod "550853e0-a7b5-406d-bb66-8d36cb6f5f68" (UID: "550853e0-a7b5-406d-bb66-8d36cb6f5f68"). 
InnerVolumeSpecName "kube-api-access-4l87z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.745935 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l87z\" (UniqueName: \"kubernetes.io/projected/550853e0-a7b5-406d-bb66-8d36cb6f5f68-kube-api-access-4l87z\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.745974 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.747917 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "550853e0-a7b5-406d-bb66-8d36cb6f5f68" (UID: "550853e0-a7b5-406d-bb66-8d36cb6f5f68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.849349 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/550853e0-a7b5-406d-bb66-8d36cb6f5f68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.963893 4808 generic.go:334] "Generic (PLEG): container finished" podID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" exitCode=0 Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.963946 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmmcg" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.963946 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee"} Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.964773 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmmcg" event={"ID":"550853e0-a7b5-406d-bb66-8d36cb6f5f68","Type":"ContainerDied","Data":"ccb25eff53406cd3afddc36977de47347d5a7b3c88fba33ec6d8a31f7f5dacae"} Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.964796 4808 scope.go:117] "RemoveContainer" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" Feb 17 17:09:38 crc kubenswrapper[4808]: I0217 17:09:38.991157 4808 scope.go:117] "RemoveContainer" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.014312 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.024961 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmmcg"] Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.031598 4808 scope.go:117] "RemoveContainer" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.077526 4808 scope.go:117] "RemoveContainer" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" Feb 17 17:09:39 crc kubenswrapper[4808]: E0217 17:09:39.077932 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee\": container with ID starting with 1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee not found: ID does not exist" containerID="1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.077977 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee"} err="failed to get container status \"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee\": rpc error: code = NotFound desc = could not find container \"1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee\": container with ID starting with 1a56b8819cd1fca726eb4c3fcdf5e0ccd7077e7a706d8ae0b0fe5468028f65ee not found: ID does not exist" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.078003 4808 scope.go:117] "RemoveContainer" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd" Feb 17 17:09:39 crc kubenswrapper[4808]: E0217 17:09:39.078423 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd\": container with ID starting with 94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd not found: ID does not exist" containerID="94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.078540 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd"} err="failed to get container status \"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd\": rpc error: code = NotFound desc = could not find container \"94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd\": container with ID 
starting with 94b5d753a0a096569c2160152c6c886a79cb280fdf23304f6e7125a8a857d9fd not found: ID does not exist" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.078637 4808 scope.go:117] "RemoveContainer" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006" Feb 17 17:09:39 crc kubenswrapper[4808]: E0217 17:09:39.079095 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006\": container with ID starting with 95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006 not found: ID does not exist" containerID="95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.079136 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006"} err="failed to get container status \"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006\": rpc error: code = NotFound desc = could not find container \"95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006\": container with ID starting with 95360faf15a43a71492ec59485780f7ccc2c340008cbe9cd1290386950042006 not found: ID does not exist" Feb 17 17:09:39 crc kubenswrapper[4808]: I0217 17:09:39.159267 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" path="/var/lib/kubelet/pods/550853e0-a7b5-406d-bb66-8d36cb6f5f68/volumes" Feb 17 17:09:47 crc kubenswrapper[4808]: I0217 17:09:47.162657 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:09:47 crc kubenswrapper[4808]: E0217 17:09:47.163453 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:09:47 crc kubenswrapper[4808]: E0217 17:09:47.167815 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:09:48 crc kubenswrapper[4808]: E0217 17:09:48.147099 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:01 crc kubenswrapper[4808]: E0217 17:10:01.148392 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:10:02 crc kubenswrapper[4808]: I0217 17:10:02.146409 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:10:02 crc kubenswrapper[4808]: E0217 17:10:02.146742 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:10:02 crc kubenswrapper[4808]: E0217 17:10:02.147728 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:13 crc kubenswrapper[4808]: I0217 17:10:13.147834 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.290334 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.290399 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.290539 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:10:13 crc kubenswrapper[4808]: E0217 17:10:13.292651 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:14 crc kubenswrapper[4808]: E0217 17:10:14.146851 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:10:15 crc kubenswrapper[4808]: I0217 17:10:15.145803 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:10:15 crc kubenswrapper[4808]: E0217 17:10:15.146391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:10:26 crc kubenswrapper[4808]: E0217 17:10:26.149142 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:10:26 crc kubenswrapper[4808]: E0217 17:10:26.150465 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:29 crc kubenswrapper[4808]: I0217 17:10:29.145549 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:10:29 crc kubenswrapper[4808]: E0217 17:10:29.146292 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.969759 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:35 crc kubenswrapper[4808]: E0217 17:10:35.970863 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.970885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server" Feb 17 17:10:35 crc kubenswrapper[4808]: E0217 17:10:35.970906 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-utilities" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.970916 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-utilities" Feb 17 17:10:35 crc kubenswrapper[4808]: E0217 17:10:35.970946 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-content" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.970956 4808 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="extract-content" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.971213 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="550853e0-a7b5-406d-bb66-8d36cb6f5f68" containerName="registry-server" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.973105 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:35 crc kubenswrapper[4808]: I0217 17:10:35.989398 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.072149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.072253 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.072381 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174309 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174512 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174884 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.174997 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.192942 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"redhat-marketplace-f2w9x\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.303984 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:36 crc kubenswrapper[4808]: I0217 17:10:36.804316 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:37 crc kubenswrapper[4808]: I0217 17:10:37.511506 4808 generic.go:334] "Generic (PLEG): container finished" podID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerID="aa60336fee41c015063cc250ca6ff139627382ec60ae1f64c76d7bc307a3dd39" exitCode=0 Feb 17 17:10:37 crc kubenswrapper[4808]: I0217 17:10:37.511562 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"aa60336fee41c015063cc250ca6ff139627382ec60ae1f64c76d7bc307a3dd39"} Feb 17 17:10:37 crc kubenswrapper[4808]: I0217 17:10:37.511836 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerStarted","Data":"79d1d6122d8cbc81beb9e1ded7f118ba4058e37cb6329f8f23299756afedfa1e"} Feb 17 17:10:39 crc kubenswrapper[4808]: E0217 17:10:39.147526 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:10:39 crc kubenswrapper[4808]: I0217 17:10:39.529700 
4808 generic.go:334] "Generic (PLEG): container finished" podID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerID="807e02a1ff4264542ae56a6ca4ff858c09eb0c4a3460e96e70dd6f7236fa11ce" exitCode=0 Feb 17 17:10:39 crc kubenswrapper[4808]: I0217 17:10:39.529744 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"807e02a1ff4264542ae56a6ca4ff858c09eb0c4a3460e96e70dd6f7236fa11ce"} Feb 17 17:10:40 crc kubenswrapper[4808]: I0217 17:10:40.542858 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerStarted","Data":"8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051"} Feb 17 17:10:40 crc kubenswrapper[4808]: I0217 17:10:40.567334 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f2w9x" podStartSLOduration=3.153401244 podStartE2EDuration="5.567315742s" podCreationTimestamp="2026-02-17 17:10:35 +0000 UTC" firstStartedPulling="2026-02-17 17:10:37.513890956 +0000 UTC m=+4601.030250029" lastFinishedPulling="2026-02-17 17:10:39.927805454 +0000 UTC m=+4603.444164527" observedRunningTime="2026-02-17 17:10:40.562745968 +0000 UTC m=+4604.079105061" watchObservedRunningTime="2026-02-17 17:10:40.567315742 +0000 UTC m=+4604.083674825" Feb 17 17:10:41 crc kubenswrapper[4808]: I0217 17:10:41.161038 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:10:41 crc kubenswrapper[4808]: E0217 17:10:41.161637 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:10:41 crc kubenswrapper[4808]: E0217 17:10:41.162865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.967705 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"] Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.970695 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:42 crc kubenswrapper[4808]: I0217 17:10:42.990105 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"] Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.035122 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.035313 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc 
kubenswrapper[4808]: I0217 17:10:43.035377 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137193 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137354 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137480 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.137999 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: 
I0217 17:10:43.138086 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.678500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"redhat-operators-55v9n\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") " pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:43 crc kubenswrapper[4808]: I0217 17:10:43.907499 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n" Feb 17 17:10:44 crc kubenswrapper[4808]: I0217 17:10:44.420774 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"] Feb 17 17:10:44 crc kubenswrapper[4808]: I0217 17:10:44.577704 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerStarted","Data":"9920e8f68f09cf8f04ed794e724f8c7d31d4890592a660c4b9c794b5f1f573ac"} Feb 17 17:10:45 crc kubenswrapper[4808]: I0217 17:10:45.588626 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8" exitCode=0 Feb 17 17:10:45 crc kubenswrapper[4808]: I0217 17:10:45.589078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"} Feb 17 17:10:46 crc 
kubenswrapper[4808]: I0217 17:10:46.304120 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.304410 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.363675 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.598858 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerStarted","Data":"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"} Feb 17 17:10:46 crc kubenswrapper[4808]: I0217 17:10:46.651462 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:48 crc kubenswrapper[4808]: I0217 17:10:48.756304 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:48 crc kubenswrapper[4808]: I0217 17:10:48.757835 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f2w9x" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server" containerID="cri-o://8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051" gracePeriod=2 Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.633619 4808 generic.go:334] "Generic (PLEG): container finished" podID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerID="8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051" exitCode=0 Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.634149 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051"} Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.635507 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980" exitCode=0 Feb 17 17:10:50 crc kubenswrapper[4808]: I0217 17:10:50.635534 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"} Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.211663 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.330705 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") pod \"64128c02-3c74-41f3-bcdf-81c9026732ea\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.330798 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") pod \"64128c02-3c74-41f3-bcdf-81c9026732ea\" (UID: \"64128c02-3c74-41f3-bcdf-81c9026732ea\") " Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.330872 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") pod \"64128c02-3c74-41f3-bcdf-81c9026732ea\" (UID: 
\"64128c02-3c74-41f3-bcdf-81c9026732ea\") " Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.332695 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities" (OuterVolumeSpecName: "utilities") pod "64128c02-3c74-41f3-bcdf-81c9026732ea" (UID: "64128c02-3c74-41f3-bcdf-81c9026732ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.338965 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm" (OuterVolumeSpecName: "kube-api-access-4zssm") pod "64128c02-3c74-41f3-bcdf-81c9026732ea" (UID: "64128c02-3c74-41f3-bcdf-81c9026732ea"). InnerVolumeSpecName "kube-api-access-4zssm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.351142 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64128c02-3c74-41f3-bcdf-81c9026732ea" (UID: "64128c02-3c74-41f3-bcdf-81c9026732ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.433718 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.434172 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zssm\" (UniqueName: \"kubernetes.io/projected/64128c02-3c74-41f3-bcdf-81c9026732ea-kube-api-access-4zssm\") on node \"crc\" DevicePath \"\"" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.434187 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64128c02-3c74-41f3-bcdf-81c9026732ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.653871 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerStarted","Data":"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"} Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.674166 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f2w9x" event={"ID":"64128c02-3c74-41f3-bcdf-81c9026732ea","Type":"ContainerDied","Data":"79d1d6122d8cbc81beb9e1ded7f118ba4058e37cb6329f8f23299756afedfa1e"} Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.674223 4808 scope.go:117] "RemoveContainer" containerID="8febff6d9d2d99f90502520cc294ac104041c05c0bd7d3ee690e5d9b10c2d051" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.674245 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f2w9x" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.686586 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-55v9n" podStartSLOduration=4.250954671 podStartE2EDuration="9.686550102s" podCreationTimestamp="2026-02-17 17:10:42 +0000 UTC" firstStartedPulling="2026-02-17 17:10:45.590882405 +0000 UTC m=+4609.107241478" lastFinishedPulling="2026-02-17 17:10:51.026477836 +0000 UTC m=+4614.542836909" observedRunningTime="2026-02-17 17:10:51.675030619 +0000 UTC m=+4615.191389692" watchObservedRunningTime="2026-02-17 17:10:51.686550102 +0000 UTC m=+4615.202909175" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.700421 4808 scope.go:117] "RemoveContainer" containerID="807e02a1ff4264542ae56a6ca4ff858c09eb0c4a3460e96e70dd6f7236fa11ce" Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.708022 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.716545 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f2w9x"] Feb 17 17:10:51 crc kubenswrapper[4808]: I0217 17:10:51.734772 4808 scope.go:117] "RemoveContainer" containerID="aa60336fee41c015063cc250ca6ff139627382ec60ae1f64c76d7bc307a3dd39" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.272688 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.272755 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.272893 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:t
ls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 17 17:10:52 crc kubenswrapper[4808]: E0217 17:10:52.274136 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:10:53 crc kubenswrapper[4808]: I0217 17:10:53.159941 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" path="/var/lib/kubelet/pods/64128c02-3c74-41f3-bcdf-81c9026732ea/volumes"
Feb 17 17:10:53 crc kubenswrapper[4808]: I0217 17:10:53.908064 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:53 crc kubenswrapper[4808]: I0217 17:10:53.908115 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:10:54 crc kubenswrapper[4808]: I0217 17:10:54.146235 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:10:54 crc kubenswrapper[4808]: E0217 17:10:54.146774 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:10:54 crc kubenswrapper[4808]: I0217 17:10:54.958626 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-55v9n" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server" probeResult="failure" output=<
Feb 17 17:10:54 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s
Feb 17 17:10:54 crc kubenswrapper[4808]: >
Feb 17 17:10:55 crc kubenswrapper[4808]: E0217 17:10:55.147648 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:04 crc kubenswrapper[4808]: E0217 17:11:04.150097 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:11:04 crc kubenswrapper[4808]: I0217 17:11:04.416661 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:04 crc kubenswrapper[4808]: I0217 17:11:04.481666 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:04 crc kubenswrapper[4808]: I0217 17:11:04.667786 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:11:05 crc kubenswrapper[4808]: I0217 17:11:05.807349 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-55v9n" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server" containerID="cri-o://b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c" gracePeriod=2
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.145393 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.146066 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.445666 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.553187 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") pod \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") "
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.553252 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") pod \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") "
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.553308 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") pod \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\" (UID: \"e6ce41d2-581f-4deb-96e5-feccc71efa4f\") "
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.554462 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities" (OuterVolumeSpecName: "utilities") pod "e6ce41d2-581f-4deb-96e5-feccc71efa4f" (UID: "e6ce41d2-581f-4deb-96e5-feccc71efa4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.559775 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr" (OuterVolumeSpecName: "kube-api-access-7wxzr") pod "e6ce41d2-581f-4deb-96e5-feccc71efa4f" (UID: "e6ce41d2-581f-4deb-96e5-feccc71efa4f"). InnerVolumeSpecName "kube-api-access-7wxzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.656260 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wxzr\" (UniqueName: \"kubernetes.io/projected/e6ce41d2-581f-4deb-96e5-feccc71efa4f-kube-api-access-7wxzr\") on node \"crc\" DevicePath \"\""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.656488 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.683405 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6ce41d2-581f-4deb-96e5-feccc71efa4f" (UID: "e6ce41d2-581f-4deb-96e5-feccc71efa4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.758952 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6ce41d2-581f-4deb-96e5-feccc71efa4f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818599 4808 generic.go:334] "Generic (PLEG): container finished" podID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c" exitCode=0
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818648 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"}
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818657 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-55v9n"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-55v9n" event={"ID":"e6ce41d2-581f-4deb-96e5-feccc71efa4f","Type":"ContainerDied","Data":"9920e8f68f09cf8f04ed794e724f8c7d31d4890592a660c4b9c794b5f1f573ac"}
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.818720 4808 scope.go:117] "RemoveContainer" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.843913 4808 scope.go:117] "RemoveContainer" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.861079 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.877152 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-55v9n"]
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.895988 4808 scope.go:117] "RemoveContainer" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.928498 4808 scope.go:117] "RemoveContainer" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.929002 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c\": container with ID starting with b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c not found: ID does not exist" containerID="b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929042 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c"} err="failed to get container status \"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c\": rpc error: code = NotFound desc = could not find container \"b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c\": container with ID starting with b393b8e3494270ed30cac2372010ca50c57807eb489f7c59fdf9fe1bcc69cb6c not found: ID does not exist"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929068 4808 scope.go:117] "RemoveContainer" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.929429 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": container with ID starting with fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980 not found: ID does not exist" containerID="fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929475 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980"} err="failed to get container status \"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": rpc error: code = NotFound desc = could not find container \"fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980\": container with ID starting with fc5855197c0256c75450a3f0185d1f9d8e380293721c0c4bafa7140615b1b980 not found: ID does not exist"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929516 4808 scope.go:117] "RemoveContainer" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"
Feb 17 17:11:06 crc kubenswrapper[4808]: E0217 17:11:06.929842 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8\": container with ID starting with 8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8 not found: ID does not exist" containerID="8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"
Feb 17 17:11:06 crc kubenswrapper[4808]: I0217 17:11:06.929865 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8"} err="failed to get container status \"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8\": rpc error: code = NotFound desc = could not find container \"8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8\": container with ID starting with 8cf6f7d4eabc01eb6246e1b5cdad2eb514395905564a946054ac6d48157187a8 not found: ID does not exist"
Feb 17 17:11:07 crc kubenswrapper[4808]: I0217 17:11:07.158142 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" path="/var/lib/kubelet/pods/e6ce41d2-581f-4deb-96e5-feccc71efa4f/volumes"
Feb 17 17:11:09 crc kubenswrapper[4808]: E0217 17:11:09.148420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:18 crc kubenswrapper[4808]: I0217 17:11:18.145748 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:18 crc kubenswrapper[4808]: E0217 17:11:18.146523 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:18 crc kubenswrapper[4808]: E0217 17:11:18.148946 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:11:24 crc kubenswrapper[4808]: E0217 17:11:24.148025 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:30 crc kubenswrapper[4808]: E0217 17:11:30.147829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:11:33 crc kubenswrapper[4808]: I0217 17:11:33.145989 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:33 crc kubenswrapper[4808]: E0217 17:11:33.146645 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.033789 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"]
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034530 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-utilities"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034545 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-utilities"
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034560 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034566 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034589 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-utilities"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034596 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-utilities"
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034610 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-content"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034615 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="extract-content"
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034632 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034639 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.034657 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-content"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034663 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="extract-content"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034923 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="64128c02-3c74-41f3-bcdf-81c9026732ea" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.034952 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6ce41d2-581f-4deb-96e5-feccc71efa4f" containerName="registry-server"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.035882 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.041748 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-gpcsv"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.041803 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.042521 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.045241 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.053752 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"]
Feb 17 17:11:37 crc kubenswrapper[4808]: E0217 17:11:37.178464 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.194488 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.194560 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.194663 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.297263 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.297360 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.297472 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.303496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.305093 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.314768 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.359674 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:11:37 crc kubenswrapper[4808]: I0217 17:11:37.886225 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"]
Feb 17 17:11:38 crc kubenswrapper[4808]: I0217 17:11:38.136461 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerStarted","Data":"51b0c8c29ac10b4d9baa4163a7a8c609d16873c474ab44261d797cf1ed54691b"}
Feb 17 17:11:39 crc kubenswrapper[4808]: I0217 17:11:39.158480 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerStarted","Data":"567a499a540dcc4f77c295be8cc3ad41d4b2ef5fffbee3f75374436d200ff856"}
Feb 17 17:11:39 crc kubenswrapper[4808]: I0217 17:11:39.168982 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" podStartSLOduration=1.72798361 podStartE2EDuration="2.168957646s" podCreationTimestamp="2026-02-17 17:11:37 +0000 UTC" firstStartedPulling="2026-02-17 17:11:37.893193339 +0000 UTC m=+4661.409552402" lastFinishedPulling="2026-02-17 17:11:38.334167365 +0000 UTC m=+4661.850526438" observedRunningTime="2026-02-17 17:11:39.158449541 +0000 UTC m=+4662.674808634" watchObservedRunningTime="2026-02-17 17:11:39.168957646 +0000 UTC m=+4662.685316729"
Feb 17 17:11:44 crc kubenswrapper[4808]: I0217 17:11:44.146029 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:44 crc kubenswrapper[4808]: E0217 17:11:44.147013 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:44 crc kubenswrapper[4808]: E0217 17:11:44.148750 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:11:52 crc kubenswrapper[4808]: E0217 17:11:52.148913 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:11:58 crc kubenswrapper[4808]: I0217 17:11:58.145465 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:11:58 crc kubenswrapper[4808]: E0217 17:11:58.146272 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:11:59 crc kubenswrapper[4808]: E0217 17:11:59.148347 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:03 crc kubenswrapper[4808]: E0217 17:12:03.149174 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:12:10 crc kubenswrapper[4808]: I0217 17:12:10.145965 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:12:10 crc kubenswrapper[4808]: E0217 17:12:10.146718 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:12:11 crc kubenswrapper[4808]: E0217 17:12:11.148981 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:16 crc kubenswrapper[4808]: E0217 17:12:16.147852 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:12:21 crc kubenswrapper[4808]: I0217 17:12:21.145369 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:12:21 crc kubenswrapper[4808]: E0217 17:12:21.146269 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:12:24 crc kubenswrapper[4808]: E0217 17:12:24.149513 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:29 crc kubenswrapper[4808]: E0217 17:12:29.150256 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:12:34 crc kubenswrapper[4808]: I0217 17:12:34.146659 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d"
Feb 17 17:12:35 crc kubenswrapper[4808]: I0217 17:12:35.720845 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"}
Feb 17 17:12:38 crc kubenswrapper[4808]: E0217 17:12:38.149180 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:41 crc kubenswrapper[4808]: E0217 17:12:41.148321 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:12:50 crc kubenswrapper[4808]: E0217 17:12:50.150514 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:12:52 crc kubenswrapper[4808]: E0217 17:12:52.147417 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:13:03 crc kubenswrapper[4808]: E0217 17:13:03.147286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:13:05 crc kubenswrapper[4808]: E0217 17:13:05.147563 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:13:14 crc kubenswrapper[4808]: E0217 17:13:14.148894 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:13:17 crc kubenswrapper[4808]: E0217 17:13:17.157714 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:13:26 crc kubenswrapper[4808]: E0217 17:13:26.147024 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:13:31 crc kubenswrapper[4808]: E0217 17:13:31.150470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.679868 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvffh"]
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.683125 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.692812 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvffh"]
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.859752 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.859793 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh"
Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.859981 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod
\"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.962944 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963278 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963326 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963501 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:36 crc kubenswrapper[4808]: I0217 17:13:36.963780 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"community-operators-zvffh\" (UID: 
\"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:37 crc kubenswrapper[4808]: I0217 17:13:37.176496 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"community-operators-zvffh\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:37 crc kubenswrapper[4808]: I0217 17:13:37.307252 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:37 crc kubenswrapper[4808]: I0217 17:13:37.786287 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:38 crc kubenswrapper[4808]: I0217 17:13:38.344850 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" exitCode=0 Feb 17 17:13:38 crc kubenswrapper[4808]: I0217 17:13:38.344912 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929"} Feb 17 17:13:38 crc kubenswrapper[4808]: I0217 17:13:38.345074 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerStarted","Data":"625c4fef6bea75edcad876b08bf830934120d670591765b411b4986a0f6c1872"} Feb 17 17:13:40 crc kubenswrapper[4808]: E0217 17:13:40.148423 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:40 crc kubenswrapper[4808]: I0217 17:13:40.370417 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" exitCode=0 Feb 17 17:13:40 crc kubenswrapper[4808]: I0217 17:13:40.370495 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d"} Feb 17 17:13:41 crc kubenswrapper[4808]: I0217 17:13:41.381477 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerStarted","Data":"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768"} Feb 17 17:13:41 crc kubenswrapper[4808]: I0217 17:13:41.406938 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvffh" podStartSLOduration=3.005038174 podStartE2EDuration="5.406917335s" podCreationTimestamp="2026-02-17 17:13:36 +0000 UTC" firstStartedPulling="2026-02-17 17:13:38.34726305 +0000 UTC m=+4781.863622133" lastFinishedPulling="2026-02-17 17:13:40.749142221 +0000 UTC m=+4784.265501294" observedRunningTime="2026-02-17 17:13:41.397099588 +0000 UTC m=+4784.913458671" watchObservedRunningTime="2026-02-17 17:13:41.406917335 +0000 UTC m=+4784.923276418" Feb 17 17:13:45 crc kubenswrapper[4808]: E0217 17:13:45.148928 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.307920 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.308244 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.366837 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.488022 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:47 crc kubenswrapper[4808]: I0217 17:13:47.607257 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:49 crc kubenswrapper[4808]: I0217 17:13:49.462339 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zvffh" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" containerID="cri-o://5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" gracePeriod=2 Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.001588 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.148313 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") pod \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.148369 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") pod \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.148462 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") pod \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\" (UID: \"ca6bd2a4-d763-4e62-987d-a92c0b70ab23\") " Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.149284 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities" (OuterVolumeSpecName: "utilities") pod "ca6bd2a4-d763-4e62-987d-a92c0b70ab23" (UID: "ca6bd2a4-d763-4e62-987d-a92c0b70ab23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.154314 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv" (OuterVolumeSpecName: "kube-api-access-q4mhv") pod "ca6bd2a4-d763-4e62-987d-a92c0b70ab23" (UID: "ca6bd2a4-d763-4e62-987d-a92c0b70ab23"). InnerVolumeSpecName "kube-api-access-q4mhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.252736 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.252796 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4mhv\" (UniqueName: \"kubernetes.io/projected/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-kube-api-access-q4mhv\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472445 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" exitCode=0 Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472489 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768"} Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472501 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zvffh" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvffh" event={"ID":"ca6bd2a4-d763-4e62-987d-a92c0b70ab23","Type":"ContainerDied","Data":"625c4fef6bea75edcad876b08bf830934120d670591765b411b4986a0f6c1872"} Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.472589 4808 scope.go:117] "RemoveContainer" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.489981 4808 scope.go:117] "RemoveContainer" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.510731 4808 scope.go:117] "RemoveContainer" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.573617 4808 scope.go:117] "RemoveContainer" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" Feb 17 17:13:50 crc kubenswrapper[4808]: E0217 17:13:50.574065 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768\": container with ID starting with 5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768 not found: ID does not exist" containerID="5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574101 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768"} err="failed to get container status \"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768\": rpc error: code = NotFound desc = could not find container 
\"5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768\": container with ID starting with 5777be6f307edad954fa4106dc511251ac2aab53db20b017d9c79b97cda20768 not found: ID does not exist" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574128 4808 scope.go:117] "RemoveContainer" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" Feb 17 17:13:50 crc kubenswrapper[4808]: E0217 17:13:50.574376 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d\": container with ID starting with 25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d not found: ID does not exist" containerID="25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574400 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d"} err="failed to get container status \"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d\": rpc error: code = NotFound desc = could not find container \"25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d\": container with ID starting with 25f65877411676a4ea1e8683f40495c1e0d393dae14df9e0e54782496ba8b60d not found: ID does not exist" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574414 4808 scope.go:117] "RemoveContainer" containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" Feb 17 17:13:50 crc kubenswrapper[4808]: E0217 17:13:50.574718 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929\": container with ID starting with 1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929 not found: ID does not exist" 
containerID="1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.574768 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929"} err="failed to get container status \"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929\": rpc error: code = NotFound desc = could not find container \"1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929\": container with ID starting with 1e4954d8d53e14d207092fd1ff1c65c599e51163b8bd26a9c95ae2fa67233929 not found: ID does not exist" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.953675 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca6bd2a4-d763-4e62-987d-a92c0b70ab23" (UID: "ca6bd2a4-d763-4e62-987d-a92c0b70ab23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:13:50 crc kubenswrapper[4808]: I0217 17:13:50.978114 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca6bd2a4-d763-4e62-987d-a92c0b70ab23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:13:51 crc kubenswrapper[4808]: I0217 17:13:51.165027 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:51 crc kubenswrapper[4808]: I0217 17:13:51.165061 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zvffh"] Feb 17 17:13:53 crc kubenswrapper[4808]: I0217 17:13:53.157197 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" path="/var/lib/kubelet/pods/ca6bd2a4-d763-4e62-987d-a92c0b70ab23/volumes" Feb 17 17:13:55 crc kubenswrapper[4808]: E0217 17:13:55.148228 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:13:59 crc kubenswrapper[4808]: E0217 17:13:59.151377 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:10 crc kubenswrapper[4808]: E0217 17:14:10.160064 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:12 crc kubenswrapper[4808]: E0217 17:14:12.183276 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:24 crc kubenswrapper[4808]: E0217 17:14:24.149098 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:25 crc kubenswrapper[4808]: E0217 17:14:25.149216 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:37 crc kubenswrapper[4808]: E0217 17:14:37.160565 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:38 crc kubenswrapper[4808]: E0217 17:14:38.148865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:48 crc kubenswrapper[4808]: E0217 17:14:48.148884 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:14:51 crc kubenswrapper[4808]: E0217 17:14:51.148119 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:14:51 crc kubenswrapper[4808]: I0217 17:14:51.592595 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:14:51 crc kubenswrapper[4808]: I0217 17:14:51.592876 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:14:59 crc kubenswrapper[4808]: E0217 17:14:59.148952 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" 
with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.160825 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9"] Feb 17 17:15:00 crc kubenswrapper[4808]: E0217 17:15:00.161675 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.161693 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-content" Feb 17 17:15:00 crc kubenswrapper[4808]: E0217 17:15:00.161706 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.161713 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4808]: E0217 17:15:00.161735 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.161743 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="extract-utilities" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.162112 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca6bd2a4-d763-4e62-987d-a92c0b70ab23" containerName="registry-server" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.163061 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.164936 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.165090 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.171059 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9"] Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.248454 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.248527 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.248555 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.350352 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.350463 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.350500 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.351344 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.379800 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.380933 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"collect-profiles-29522475-hrfk9\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.484080 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:00 crc kubenswrapper[4808]: I0217 17:15:00.955922 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9"] Feb 17 17:15:01 crc kubenswrapper[4808]: I0217 17:15:01.144343 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" event={"ID":"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6","Type":"ContainerStarted","Data":"4291df65054dcd470f5111a9574440824feb650adcbbe4cc8e3880fa44689cf1"} Feb 17 17:15:02 crc kubenswrapper[4808]: I0217 17:15:02.157192 4808 generic.go:334] "Generic (PLEG): container finished" podID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerID="cc95aa572ff0403bb73e21beb9f0dc29f6d5c4ca75ea590e0734ae58b602f1f0" exitCode=0 Feb 17 17:15:02 crc kubenswrapper[4808]: I0217 17:15:02.157491 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" 
event={"ID":"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6","Type":"ContainerDied","Data":"cc95aa572ff0403bb73e21beb9f0dc29f6d5c4ca75ea590e0734ae58b602f1f0"} Feb 17 17:15:03 crc kubenswrapper[4808]: E0217 17:15:03.147794 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.571040 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.722860 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") pod \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.723014 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") pod \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.723057 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") pod \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\" (UID: \"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6\") " Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.723900 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" (UID: "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.731720 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" (UID: "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.734800 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h" (OuterVolumeSpecName: "kube-api-access-tzc8h") pod "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" (UID: "7ac3bf12-5c8e-40fe-b51b-c7629260bbd6"). InnerVolumeSpecName "kube-api-access-tzc8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.826162 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzc8h\" (UniqueName: \"kubernetes.io/projected/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-kube-api-access-tzc8h\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.826194 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:03 crc kubenswrapper[4808]: I0217 17:15:03.826203 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ac3bf12-5c8e-40fe-b51b-c7629260bbd6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.180684 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" event={"ID":"7ac3bf12-5c8e-40fe-b51b-c7629260bbd6","Type":"ContainerDied","Data":"4291df65054dcd470f5111a9574440824feb650adcbbe4cc8e3880fa44689cf1"} Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.180731 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522475-hrfk9" Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.180746 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4291df65054dcd470f5111a9574440824feb650adcbbe4cc8e3880fa44689cf1" Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.654098 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 17:15:04 crc kubenswrapper[4808]: I0217 17:15:04.665354 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522430-jhp9b"] Feb 17 17:15:05 crc kubenswrapper[4808]: I0217 17:15:05.165122 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f89f01-6a5d-4eb4-adc9-cbfbd921accf" path="/var/lib/kubelet/pods/e5f89f01-6a5d-4eb4-adc9-cbfbd921accf/volumes" Feb 17 17:15:11 crc kubenswrapper[4808]: E0217 17:15:11.148391 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:15 crc kubenswrapper[4808]: E0217 17:15:15.149910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:19 crc kubenswrapper[4808]: I0217 17:15:19.046496 4808 scope.go:117] "RemoveContainer" containerID="c5ba79dcf1a3ea436f18f622b5a896f04d2d690a78e981b12dc981865c236bbe" Feb 17 17:15:21 crc kubenswrapper[4808]: 
I0217 17:15:21.592995 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:15:21 crc kubenswrapper[4808]: I0217 17:15:21.593297 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:15:24 crc kubenswrapper[4808]: I0217 17:15:24.149617 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.273544 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.274174 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.274406 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:15:24 crc kubenswrapper[4808]: E0217 17:15:24.275870 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:28 crc kubenswrapper[4808]: E0217 17:15:28.149687 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:36 crc kubenswrapper[4808]: E0217 17:15:36.147835 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:42 crc kubenswrapper[4808]: E0217 17:15:42.149213 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:15:50 crc kubenswrapper[4808]: E0217 17:15:50.147935 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.592482 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.592763 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.592807 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.593561 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.593688 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6" gracePeriod=600 Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.722856 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6" exitCode=0 Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.722907 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" 
event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"} Feb 17 17:15:51 crc kubenswrapper[4808]: I0217 17:15:51.722945 4808 scope.go:117] "RemoveContainer" containerID="8c4199e704474ea94fecd76ffd4e953c14d6c8288f54377aa2b3edb555caf82d" Feb 17 17:15:52 crc kubenswrapper[4808]: I0217 17:15:52.754305 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"} Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.275960 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.276656 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.276822 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:15:55 crc kubenswrapper[4808]: E0217 17:15:55.278983 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:05 crc kubenswrapper[4808]: E0217 17:16:05.151062 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:09 crc kubenswrapper[4808]: E0217 17:16:09.147774 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:17 crc kubenswrapper[4808]: E0217 17:16:17.162284 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:24 crc kubenswrapper[4808]: E0217 17:16:24.147944 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:32 crc kubenswrapper[4808]: E0217 17:16:32.148518 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:39 crc kubenswrapper[4808]: E0217 17:16:39.147469 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:43 crc kubenswrapper[4808]: E0217 17:16:43.147052 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:16:50 crc kubenswrapper[4808]: E0217 17:16:50.148337 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:16:57 crc kubenswrapper[4808]: E0217 17:16:57.162252 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:04 crc kubenswrapper[4808]: E0217 17:17:04.150650 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:09 crc kubenswrapper[4808]: E0217 17:17:09.147895 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:18 crc kubenswrapper[4808]: E0217 17:17:18.148215 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:22 crc kubenswrapper[4808]: E0217 17:17:22.148309 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:32 crc kubenswrapper[4808]: E0217 17:17:32.149416 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:36 crc kubenswrapper[4808]: E0217 17:17:36.148416 4808 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:44 crc kubenswrapper[4808]: E0217 17:17:44.150065 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:17:51 crc kubenswrapper[4808]: E0217 17:17:51.149030 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.592775 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.593048 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.922331 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerID="567a499a540dcc4f77c295be8cc3ad41d4b2ef5fffbee3f75374436d200ff856" exitCode=2 Feb 17 17:17:51 crc kubenswrapper[4808]: I0217 17:17:51.922373 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerDied","Data":"567a499a540dcc4f77c295be8cc3ad41d4b2ef5fffbee3f75374436d200ff856"} Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.455565 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.538398 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") pod \"8b75e2b3-ab6a-4088-897b-7a11da62a654\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.538560 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") pod \"8b75e2b3-ab6a-4088-897b-7a11da62a654\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.538607 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") pod \"8b75e2b3-ab6a-4088-897b-7a11da62a654\" (UID: \"8b75e2b3-ab6a-4088-897b-7a11da62a654\") " Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.544796 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd" (OuterVolumeSpecName: "kube-api-access-w6mwd") pod "8b75e2b3-ab6a-4088-897b-7a11da62a654" (UID: "8b75e2b3-ab6a-4088-897b-7a11da62a654"). InnerVolumeSpecName "kube-api-access-w6mwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.568080 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b75e2b3-ab6a-4088-897b-7a11da62a654" (UID: "8b75e2b3-ab6a-4088-897b-7a11da62a654"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.573246 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory" (OuterVolumeSpecName: "inventory") pod "8b75e2b3-ab6a-4088-897b-7a11da62a654" (UID: "8b75e2b3-ab6a-4088-897b-7a11da62a654"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.642137 4808 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-inventory\") on node \"crc\" DevicePath \"\""
Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.642180 4808 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b75e2b3-ab6a-4088-897b-7a11da62a654-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.642194 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6mwd\" (UniqueName: \"kubernetes.io/projected/8b75e2b3-ab6a-4088-897b-7a11da62a654-kube-api-access-w6mwd\") on node \"crc\" DevicePath \"\""
Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.941462 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl" event={"ID":"8b75e2b3-ab6a-4088-897b-7a11da62a654","Type":"ContainerDied","Data":"51b0c8c29ac10b4d9baa4163a7a8c609d16873c474ab44261d797cf1ed54691b"}
Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.941502 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51b0c8c29ac10b4d9baa4163a7a8c609d16873c474ab44261d797cf1ed54691b"
Feb 17 17:17:53 crc kubenswrapper[4808]: I0217 17:17:53.941609 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl"
Feb 17 17:17:58 crc kubenswrapper[4808]: E0217 17:17:58.148250 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:18:06 crc kubenswrapper[4808]: E0217 17:18:06.147457 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:18:13 crc kubenswrapper[4808]: E0217 17:18:13.147753 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:18:19 crc kubenswrapper[4808]: E0217 17:18:19.147735 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:18:21 crc kubenswrapper[4808]: I0217 17:18:21.592267 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:18:21 crc kubenswrapper[4808]: I0217 17:18:21.593616 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:18:24 crc kubenswrapper[4808]: E0217 17:18:24.149266 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:18:34 crc kubenswrapper[4808]: E0217 17:18:34.935478 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:18:34 crc kubenswrapper[4808]: I0217 17:18:34.963483 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-c58vl" podUID="42711d14-278f-41eb-80ce-2e67add356b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 17 17:18:37 crc kubenswrapper[4808]: E0217 17:18:37.156841 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:18:49 crc kubenswrapper[4808]: E0217 17:18:49.150819 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:18:51 crc kubenswrapper[4808]: E0217 17:18:51.150829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.592266 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.592344 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.592399 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k"
Feb 17 17:18:51 crc kubenswrapper[4808]: I0217
17:18:51.593359 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 17 17:18:51 crc kubenswrapper[4808]: I0217 17:18:51.593482 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" gracePeriod=600
Feb 17 17:18:51 crc kubenswrapper[4808]: E0217 17:18:51.724890 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:18:51 crc kubenswrapper[4808]: E0217 17:18:51.788184 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca38b6e7_b21c_453d_8b6c_a163dac84b35.slice/crio-conmon-700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e.scope\": RecentStats: unable to find data in memory cache]"
Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.234166 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" exitCode=0
Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.234248 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"}
Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.234523 4808 scope.go:117] "RemoveContainer" containerID="58bcbbc2c5e0ad864e56ef85b7ac0fac1bf31a5ac704070c7ce20d28c92d2ac6"
Feb 17 17:18:52 crc kubenswrapper[4808]: I0217 17:18:52.235368 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:18:52 crc kubenswrapper[4808]: E0217 17:18:52.235705 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:19:02 crc kubenswrapper[4808]: E0217 17:19:02.148420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:19:05 crc kubenswrapper[4808]: E0217 17:19:05.164481 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:19:07 crc kubenswrapper[4808]:
I0217 17:19:07.152252 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:19:07 crc kubenswrapper[4808]: E0217 17:19:07.152902 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:19:16 crc kubenswrapper[4808]: E0217 17:19:16.147479 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:19:17 crc kubenswrapper[4808]: E0217 17:19:17.158104 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.274068 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"]
Feb 17 17:19:19 crc kubenswrapper[4808]: E0217 17:19:19.283820 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerName="collect-profiles"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.283862 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerName="collect-profiles"
Feb 17 17:19:19 crc kubenswrapper[4808]: E0217 17:19:19.283872 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.283882 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.284152 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b75e2b3-ab6a-4088-897b-7a11da62a654" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.284190 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac3bf12-5c8e-40fe-b51b-c7629260bbd6" containerName="collect-profiles"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.285801 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.289286 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v84wc"/"kube-root-ca.crt"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.289752 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v84wc"/"openshift-service-ca.crt"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.346486 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.346675 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.378223 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"]
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.448979 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.449067 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.449870 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.470426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"must-gather-25mrk\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:19 crc kubenswrapper[4808]: I0217 17:19:19.619424 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk"
Feb 17 17:19:20 crc kubenswrapper[4808]: I0217 17:19:20.149214 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:19:20 crc kubenswrapper[4808]: E0217 17:19:20.150326 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:19:20 crc kubenswrapper[4808]: I0217 17:19:20.301370 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"]
Feb 17 17:19:20 crc kubenswrapper[4808]: W0217 17:19:20.312241 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6431aef1_ada4_4683_967f_18a8a901d3f7.slice/crio-c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b WatchSource:0}: Error finding container c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b: Status 404 returned error can't find the container with id c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b
Feb 17 17:19:20 crc kubenswrapper[4808]: I0217 17:19:20.561856 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerStarted","Data":"c6947f6205ae010942b8b25c11256ac17b4ff99bd3e2828634f17024b59bfe8b"}
Feb 17 17:19:29 crc kubenswrapper[4808]: E0217 17:19:29.147126 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:19:29 crc kubenswrapper[4808]: I0217 17:19:29.652867 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerStarted","Data":"271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306"}
Feb 17 17:19:29 crc kubenswrapper[4808]: I0217 17:19:29.652918 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerStarted","Data":"c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2"}
Feb 17 17:19:29 crc kubenswrapper[4808]: I0217 17:19:29.677882 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v84wc/must-gather-25mrk" podStartSLOduration=2.374339833 podStartE2EDuration="10.677861219s" podCreationTimestamp="2026-02-17 17:19:19 +0000 UTC" firstStartedPulling="2026-02-17 17:19:20.314185339 +0000 UTC m=+5123.830544412" lastFinishedPulling="2026-02-17 17:19:28.617706725 +0000 UTC m=+5132.134065798" observedRunningTime="2026-02-17 17:19:29.668108432 +0000 UTC m=+5133.184467515" watchObservedRunningTime="2026-02-17 17:19:29.677861219 +0000 UTC m=+5133.194220282"
Feb 17 17:19:30 crc kubenswrapper[4808]: I0217 17:19:30.953405 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"]
Feb 17 17:19:30 crc kubenswrapper[4808]: I0217 17:19:30.956633 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:30 crc kubenswrapper[4808]: I0217 17:19:30.987299 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"]
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.133905 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.133981 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.134139 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: E0217 17:19:31.150521 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.236474 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.236552 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.236806 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.237802 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.237930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.264062 4808 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"certified-operators-4t4r8\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:31 crc kubenswrapper[4808]: I0217 17:19:31.282188 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:32.448832 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"]
Feb 17 17:19:35 crc kubenswrapper[4808]: W0217 17:19:32.451428 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6463b44f_0536_4c98_964e_ffefaf92dd97.slice/crio-ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc WatchSource:0}: Error finding container ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc: Status 404 returned error can't find the container with id ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:32.694376 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerStarted","Data":"ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc"}
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:33.146285 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e"
Feb 17 17:19:35 crc kubenswrapper[4808]: E0217 17:19:33.146861 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:33.708307 4808 generic.go:334] "Generic (PLEG): container finished" podID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerID="9102d6dcaf6e3fbf8c87936c002d9f93bfb04d65b7f6656f4e84306710e44084" exitCode=0
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:33.708361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"9102d6dcaf6e3fbf8c87936c002d9f93bfb04d65b7f6656f4e84306710e44084"}
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.594952 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/crc-debug-msb9f"]
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.597729 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.599852 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v84wc"/"default-dockercfg-f8jxd"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.719911 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.719994 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.821886 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.822078 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.822097 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.843799 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"crc-debug-msb9f\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:34.926052 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f"
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:35.728182 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-msb9f" event={"ID":"1421f2cf-bbb7-4679-a249-d3233f1a590a","Type":"ContainerStarted","Data":"684b470c63b940008787f4d6bf54bfccbbb02315a2dd741d1a163efc01817f3e"}
Feb 17 17:19:35 crc kubenswrapper[4808]: I0217 17:19:35.731630 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerStarted","Data":"7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b"}
Feb 17 17:19:40 crc kubenswrapper[4808]: I0217 17:19:40.822614 4808 generic.go:334] "Generic (PLEG): container finished" podID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerID="7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b" exitCode=0
Feb 17 17:19:40 crc kubenswrapper[4808]: I0217 17:19:40.822702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8"
event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b"} Feb 17 17:19:41 crc kubenswrapper[4808]: E0217 17:19:41.148863 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:41 crc kubenswrapper[4808]: I0217 17:19:41.837713 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerStarted","Data":"ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b"} Feb 17 17:19:41 crc kubenswrapper[4808]: I0217 17:19:41.869257 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4t4r8" podStartSLOduration=4.361974856 podStartE2EDuration="11.869237973s" podCreationTimestamp="2026-02-17 17:19:30 +0000 UTC" firstStartedPulling="2026-02-17 17:19:33.710627144 +0000 UTC m=+5137.226986217" lastFinishedPulling="2026-02-17 17:19:41.217890261 +0000 UTC m=+5144.734249334" observedRunningTime="2026-02-17 17:19:41.858600202 +0000 UTC m=+5145.374959285" watchObservedRunningTime="2026-02-17 17:19:41.869237973 +0000 UTC m=+5145.385597046" Feb 17 17:19:42 crc kubenswrapper[4808]: E0217 17:19:42.147922 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:19:48 crc kubenswrapper[4808]: I0217 17:19:48.147063 4808 
scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:19:48 crc kubenswrapper[4808]: E0217 17:19:48.147858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:19:48 crc kubenswrapper[4808]: I0217 17:19:48.922199 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-msb9f" event={"ID":"1421f2cf-bbb7-4679-a249-d3233f1a590a","Type":"ContainerStarted","Data":"fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140"} Feb 17 17:19:48 crc kubenswrapper[4808]: I0217 17:19:48.937688 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v84wc/crc-debug-msb9f" podStartSLOduration=1.343487591 podStartE2EDuration="14.937667224s" podCreationTimestamp="2026-02-17 17:19:34 +0000 UTC" firstStartedPulling="2026-02-17 17:19:35.100208142 +0000 UTC m=+5138.616567215" lastFinishedPulling="2026-02-17 17:19:48.694387765 +0000 UTC m=+5152.210746848" observedRunningTime="2026-02-17 17:19:48.935306299 +0000 UTC m=+5152.451665372" watchObservedRunningTime="2026-02-17 17:19:48.937667224 +0000 UTC m=+5152.454026297" Feb 17 17:19:51 crc kubenswrapper[4808]: I0217 17:19:51.282975 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:51 crc kubenswrapper[4808]: I0217 17:19:51.283418 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:19:52 crc kubenswrapper[4808]: I0217 17:19:52.343184 
4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4t4r8" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" probeResult="failure" output=< Feb 17 17:19:52 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:19:52 crc kubenswrapper[4808]: > Feb 17 17:19:53 crc kubenswrapper[4808]: E0217 17:19:53.147472 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:19:55 crc kubenswrapper[4808]: E0217 17:19:55.147615 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:01 crc kubenswrapper[4808]: I0217 17:20:01.334974 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:01 crc kubenswrapper[4808]: I0217 17:20:01.382017 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:01 crc kubenswrapper[4808]: I0217 17:20:01.570510 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:20:02 crc kubenswrapper[4808]: I0217 17:20:02.145716 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:02 crc kubenswrapper[4808]: E0217 17:20:02.146009 4808 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:03 crc kubenswrapper[4808]: I0217 17:20:03.082448 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4t4r8" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" containerID="cri-o://ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b" gracePeriod=2 Feb 17 17:20:05 crc kubenswrapper[4808]: I0217 17:20:05.101804 4808 generic.go:334] "Generic (PLEG): container finished" podID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerID="ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b" exitCode=0 Feb 17 17:20:05 crc kubenswrapper[4808]: I0217 17:20:05.101898 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b"} Feb 17 17:20:06 crc kubenswrapper[4808]: I0217 17:20:06.121171 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4t4r8" event={"ID":"6463b44f-0536-4c98-964e-ffefaf92dd97","Type":"ContainerDied","Data":"ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc"} Feb 17 17:20:06 crc kubenswrapper[4808]: I0217 17:20:06.121816 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca759b1c23a9dc6fc5be3ab80ee352689510f493b434b8aba97dd008dd4046cc" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.032148 4808 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.136809 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4t4r8" Feb 17 17:20:08 crc kubenswrapper[4808]: E0217 17:20:08.148275 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:08 crc kubenswrapper[4808]: E0217 17:20:08.148425 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.206685 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") pod \"6463b44f-0536-4c98-964e-ffefaf92dd97\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.206815 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") pod \"6463b44f-0536-4c98-964e-ffefaf92dd97\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.206939 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") pod \"6463b44f-0536-4c98-964e-ffefaf92dd97\" (UID: \"6463b44f-0536-4c98-964e-ffefaf92dd97\") " Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.207554 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities" (OuterVolumeSpecName: "utilities") pod "6463b44f-0536-4c98-964e-ffefaf92dd97" (UID: "6463b44f-0536-4c98-964e-ffefaf92dd97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.213647 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5" (OuterVolumeSpecName: "kube-api-access-fzzf5") pod "6463b44f-0536-4c98-964e-ffefaf92dd97" (UID: "6463b44f-0536-4c98-964e-ffefaf92dd97"). InnerVolumeSpecName "kube-api-access-fzzf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.269568 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6463b44f-0536-4c98-964e-ffefaf92dd97" (UID: "6463b44f-0536-4c98-964e-ffefaf92dd97"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.309697 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.309741 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzzf5\" (UniqueName: \"kubernetes.io/projected/6463b44f-0536-4c98-964e-ffefaf92dd97-kube-api-access-fzzf5\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.309761 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6463b44f-0536-4c98-964e-ffefaf92dd97-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.507343 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:20:08 crc kubenswrapper[4808]: I0217 17:20:08.520980 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4t4r8"] Feb 17 17:20:09 crc kubenswrapper[4808]: I0217 17:20:09.156322 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" path="/var/lib/kubelet/pods/6463b44f-0536-4c98-964e-ffefaf92dd97/volumes" Feb 17 17:20:14 crc kubenswrapper[4808]: I0217 17:20:14.146657 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:14 crc kubenswrapper[4808]: E0217 17:20:14.148553 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:20 crc kubenswrapper[4808]: E0217 17:20:20.148768 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:22 crc kubenswrapper[4808]: E0217 17:20:22.148974 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:26 crc kubenswrapper[4808]: I0217 17:20:26.146844 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:26 crc kubenswrapper[4808]: E0217 17:20:26.148296 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:27 crc kubenswrapper[4808]: I0217 17:20:27.347794 4808 generic.go:334] "Generic (PLEG): container finished" podID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerID="fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140" exitCode=0 Feb 17 17:20:27 crc 
kubenswrapper[4808]: I0217 17:20:27.347883 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-msb9f" event={"ID":"1421f2cf-bbb7-4679-a249-d3233f1a590a","Type":"ContainerDied","Data":"fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140"} Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.517587 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.555718 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-msb9f"] Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.567287 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-msb9f"] Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.641280 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") pod \"1421f2cf-bbb7-4679-a249-d3233f1a590a\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.641427 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") pod \"1421f2cf-bbb7-4679-a249-d3233f1a590a\" (UID: \"1421f2cf-bbb7-4679-a249-d3233f1a590a\") " Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.641429 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host" (OuterVolumeSpecName: "host") pod "1421f2cf-bbb7-4679-a249-d3233f1a590a" (UID: "1421f2cf-bbb7-4679-a249-d3233f1a590a"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.642003 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1421f2cf-bbb7-4679-a249-d3233f1a590a-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.649630 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq" (OuterVolumeSpecName: "kube-api-access-5qjlq") pod "1421f2cf-bbb7-4679-a249-d3233f1a590a" (UID: "1421f2cf-bbb7-4679-a249-d3233f1a590a"). InnerVolumeSpecName "kube-api-access-5qjlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:28 crc kubenswrapper[4808]: I0217 17:20:28.743811 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qjlq\" (UniqueName: \"kubernetes.io/projected/1421f2cf-bbb7-4679-a249-d3233f1a590a-kube-api-access-5qjlq\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.155998 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" path="/var/lib/kubelet/pods/1421f2cf-bbb7-4679-a249-d3233f1a590a/volumes" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.366880 4808 scope.go:117] "RemoveContainer" containerID="fce94902885db56874aa711abdba927b17899ff624af8c260483d4d779880140" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.366936 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-msb9f" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737070 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/crc-debug-s4cnw"] Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737713 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerName="container-00" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737725 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" containerName="container-00" Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737757 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-content" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737763 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-content" Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737778 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-utilities" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737786 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="extract-utilities" Feb 17 17:20:29 crc kubenswrapper[4808]: E0217 17:20:29.737798 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737803 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.737981 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="1421f2cf-bbb7-4679-a249-d3233f1a590a" 
containerName="container-00" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.738000 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6463b44f-0536-4c98-964e-ffefaf92dd97" containerName="registry-server" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.738712 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.740716 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v84wc"/"default-dockercfg-f8jxd" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.866444 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.866854 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.969427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.969595 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod 
\"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.969700 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:29 crc kubenswrapper[4808]: I0217 17:20:29.988598 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"crc-debug-s4cnw\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:30 crc kubenswrapper[4808]: I0217 17:20:30.056226 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:30 crc kubenswrapper[4808]: I0217 17:20:30.385159 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" event={"ID":"99456b7d-1910-4568-bd41-1530e3e72765","Type":"ContainerStarted","Data":"ab59990105963838e5a53279c7bde66d50d8761853c8aa2e109846f43c7c2405"} Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.417145 4808 generic.go:334] "Generic (PLEG): container finished" podID="99456b7d-1910-4568-bd41-1530e3e72765" containerID="886212de31c048e2a4a7d6ec1f21ce8db66db2cb601a099787b0b26295d79e07" exitCode=0 Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.417246 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" event={"ID":"99456b7d-1910-4568-bd41-1530e3e72765","Type":"ContainerDied","Data":"886212de31c048e2a4a7d6ec1f21ce8db66db2cb601a099787b0b26295d79e07"} Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.927146 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-s4cnw"] Feb 17 17:20:31 crc kubenswrapper[4808]: I0217 17:20:31.941634 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-s4cnw"] Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.538875 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.622215 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") pod \"99456b7d-1910-4568-bd41-1530e3e72765\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.622394 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") pod \"99456b7d-1910-4568-bd41-1530e3e72765\" (UID: \"99456b7d-1910-4568-bd41-1530e3e72765\") " Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.622908 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host" (OuterVolumeSpecName: "host") pod "99456b7d-1910-4568-bd41-1530e3e72765" (UID: "99456b7d-1910-4568-bd41-1530e3e72765"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.623319 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/99456b7d-1910-4568-bd41-1530e3e72765-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.630302 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52" (OuterVolumeSpecName: "kube-api-access-fxf52") pod "99456b7d-1910-4568-bd41-1530e3e72765" (UID: "99456b7d-1910-4568-bd41-1530e3e72765"). InnerVolumeSpecName "kube-api-access-fxf52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:32 crc kubenswrapper[4808]: I0217 17:20:32.726019 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxf52\" (UniqueName: \"kubernetes.io/projected/99456b7d-1910-4568-bd41-1530e3e72765-kube-api-access-fxf52\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.148008 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.157513 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99456b7d-1910-4568-bd41-1530e3e72765" path="/var/lib/kubelet/pods/99456b7d-1910-4568-bd41-1530e3e72765/volumes" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.213519 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v84wc/crc-debug-8xw5k"] Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.214019 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99456b7d-1910-4568-bd41-1530e3e72765" containerName="container-00" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.214043 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="99456b7d-1910-4568-bd41-1530e3e72765" containerName="container-00" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.214367 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="99456b7d-1910-4568-bd41-1530e3e72765" containerName="container-00" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.215247 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.275635 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.275705 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.275853 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:20:33 crc kubenswrapper[4808]: E0217 17:20:33.277036 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.338130 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.338273 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.436206 4808 scope.go:117] "RemoveContainer" containerID="886212de31c048e2a4a7d6ec1f21ce8db66db2cb601a099787b0b26295d79e07" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.436244 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-s4cnw" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.439920 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.440004 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.440210 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.462318 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"crc-debug-8xw5k\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:33 crc kubenswrapper[4808]: I0217 17:20:33.539014 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.448794 4808 generic.go:334] "Generic (PLEG): container finished" podID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerID="cd15dbf76ff4b1429591d975b57babb4c210c92b9b9c36cf667e623e8c29cf61" exitCode=0 Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.449329 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" event={"ID":"fe1f8ccd-1720-43d2-b334-f9dde62e0972","Type":"ContainerDied","Data":"cd15dbf76ff4b1429591d975b57babb4c210c92b9b9c36cf667e623e8c29cf61"} Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.449361 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" event={"ID":"fe1f8ccd-1720-43d2-b334-f9dde62e0972","Type":"ContainerStarted","Data":"fc9bd2d6092fb23dae133918bd69420618835ebb59416af2542dadb082ea10ee"} Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.493089 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-8xw5k"] Feb 17 17:20:34 crc kubenswrapper[4808]: I0217 17:20:34.501790 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/crc-debug-8xw5k"] Feb 17 17:20:35 crc kubenswrapper[4808]: E0217 17:20:35.147420 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.610103 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.703992 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") pod \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.704260 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host" (OuterVolumeSpecName: "host") pod "fe1f8ccd-1720-43d2-b334-f9dde62e0972" (UID: "fe1f8ccd-1720-43d2-b334-f9dde62e0972"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.704298 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") pod \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\" (UID: \"fe1f8ccd-1720-43d2-b334-f9dde62e0972\") " Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.704886 4808 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fe1f8ccd-1720-43d2-b334-f9dde62e0972-host\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.716200 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf" (OuterVolumeSpecName: "kube-api-access-c88gf") pod "fe1f8ccd-1720-43d2-b334-f9dde62e0972" (UID: "fe1f8ccd-1720-43d2-b334-f9dde62e0972"). InnerVolumeSpecName "kube-api-access-c88gf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:20:35 crc kubenswrapper[4808]: I0217 17:20:35.807420 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88gf\" (UniqueName: \"kubernetes.io/projected/fe1f8ccd-1720-43d2-b334-f9dde62e0972-kube-api-access-c88gf\") on node \"crc\" DevicePath \"\"" Feb 17 17:20:36 crc kubenswrapper[4808]: I0217 17:20:36.502051 4808 scope.go:117] "RemoveContainer" containerID="cd15dbf76ff4b1429591d975b57babb4c210c92b9b9c36cf667e623e8c29cf61" Feb 17 17:20:36 crc kubenswrapper[4808]: I0217 17:20:36.502345 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/crc-debug-8xw5k" Feb 17 17:20:37 crc kubenswrapper[4808]: I0217 17:20:37.161939 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" path="/var/lib/kubelet/pods/fe1f8ccd-1720-43d2-b334-f9dde62e0972/volumes" Feb 17 17:20:40 crc kubenswrapper[4808]: I0217 17:20:40.146085 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:40 crc kubenswrapper[4808]: E0217 17:20:40.147067 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:47 crc kubenswrapper[4808]: E0217 17:20:47.154335 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:20:48 crc kubenswrapper[4808]: E0217 17:20:48.146998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:20:52 crc kubenswrapper[4808]: I0217 17:20:52.146269 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:20:52 crc kubenswrapper[4808]: E0217 17:20:52.147824 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.275040 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.275722 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.275908 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:t
ls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:20:59 crc kubenswrapper[4808]: E0217 17:20:59.277191 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:02 crc kubenswrapper[4808]: E0217 17:21:02.148459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:04 crc kubenswrapper[4808]: I0217 17:21:04.146292 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:04 crc kubenswrapper[4808]: E0217 17:21:04.146934 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.511905 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:10 crc kubenswrapper[4808]: E0217 17:21:10.513031 4808 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerName="container-00" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.513047 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerName="container-00" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.513287 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe1f8ccd-1720-43d2-b334-f9dde62e0972" containerName="container-00" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.514976 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.541327 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.604908 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.605308 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.605386 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"redhat-operators-g2wvv\" (UID: 
\"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708132 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708188 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708312 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.708930 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.710060 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " 
pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:10 crc kubenswrapper[4808]: I0217 17:21:10.878094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"redhat-operators-g2wvv\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:11 crc kubenswrapper[4808]: I0217 17:21:11.161547 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:11 crc kubenswrapper[4808]: I0217 17:21:11.721816 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:11 crc kubenswrapper[4808]: I0217 17:21:11.951544 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerStarted","Data":"fba4b2968632d2bd4cdd0c26e698a48d92c3645d42d2a965a77a8846ddad4b21"} Feb 17 17:21:12 crc kubenswrapper[4808]: I0217 17:21:12.963169 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" exitCode=0 Feb 17 17:21:12 crc kubenswrapper[4808]: I0217 17:21:12.963227 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076"} Feb 17 17:21:13 crc kubenswrapper[4808]: E0217 17:21:13.147153 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:15 crc kubenswrapper[4808]: I0217 17:21:15.008115 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerStarted","Data":"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f"} Feb 17 17:21:16 crc kubenswrapper[4808]: I0217 17:21:16.146646 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:16 crc kubenswrapper[4808]: E0217 17:21:16.147314 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:16 crc kubenswrapper[4808]: E0217 17:21:16.147812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.151802 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/init-config-reloader/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.706124 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/config-reloader/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.715387 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/alertmanager/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.879112 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_56f9931d-b010-4282-9068-16b2e4e4b247/init-config-reloader/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.952430 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f445fb886-lsqq4_a9bf13d7-3430-4818-b8fc-239796570b6c/barbican-api/0.log" Feb 17 17:21:21 crc kubenswrapper[4808]: I0217 17:21:21.990167 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f445fb886-lsqq4_a9bf13d7-3430-4818-b8fc-239796570b6c/barbican-api-log/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.114406 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d78867d94-7lhqs_990b124d-3558-48ad-87f8-503580da5cc7/barbican-keystone-listener/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.226968 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6d78867d94-7lhqs_990b124d-3558-48ad-87f8-503580da5cc7/barbican-keystone-listener-log/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.302298 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6d995c5-hnz4n_a0db6993-f3e7-4aa7-b5cc-1b848a15b56c/barbican-worker/0.log" Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.357003 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55f6d995c5-hnz4n_a0db6993-f3e7-4aa7-b5cc-1b848a15b56c/barbican-worker-log/0.log" 
Feb 17 17:21:22 crc kubenswrapper[4808]: I0217 17:21:22.608337 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-vwl2g_e4a30af7-342e-49c0-8e89-c38f11b7cc63/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.001642 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2876084b-7055-449d-9ddb-447d3a515d80/ceilometer-notification-agent/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.048905 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2876084b-7055-449d-9ddb-447d3a515d80/proxy-httpd/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.279416 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2876084b-7055-449d-9ddb-447d3a515d80/sg-core/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.316157 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b221adbf-8d08-4f9c-8bb2-578555a453df/cinder-api/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.381303 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_b221adbf-8d08-4f9c-8bb2-578555a453df/cinder-api-log/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.610822 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fce98890-1299-4c07-8a3a-739241f0bf0d/cinder-scheduler/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.639719 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_fce98890-1299-4c07-8a3a-739241f0bf0d/probe/0.log" Feb 17 17:21:23 crc kubenswrapper[4808]: I0217 17:21:23.901779 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b35dce7b-8ffe-4981-8376-5db5a01dcf77/cloudkitty-api-log/0.log" Feb 17 17:21:23 crc 
kubenswrapper[4808]: I0217 17:21:23.905435 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_b35dce7b-8ffe-4981-8376-5db5a01dcf77/cloudkitty-api/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.174244 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_c850b5fe-4c28-4136-8136-fae52e38371b/loki-compactor/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.301145 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-zfhfg_4fa85572-1552-4a27-8974-b1e2d376167c/loki-distributor/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.465036 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-77rbq_c4fa7a6a-b7fc-464c-b529-dcf8d20de97e/gateway/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.568714 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-mdlhq_dc9fa7d9-5340-4cb0-adbb-980e7ae2acb0/gateway/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.723052 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.725380 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.737563 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_d6dbebd3-2b7c-4afa-8937-5c47b749e8b0/loki-index-gateway/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.770825 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.840255 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_c7929d5b-e791-419e-8039-50cc9f8202f2/loki-ingester/0.log" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.852902 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.852958 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.853041 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: 
I0217 17:21:24.954894 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955117 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955144 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955915 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.955946 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:24 crc kubenswrapper[4808]: I0217 17:21:24.981681 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"redhat-marketplace-f8d96\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.051716 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-pkj8k_6df15762-0f06-48ff-89bf-00f5118c6ced/loki-querier/0.log" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.067148 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.123400 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-52cj4_be29c259-d619-4326-b866-2a8560d9b818/loki-query-frontend/0.log" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.458306 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-mqnbz_3d16d4be-1ab3-4261-97a7-054701cf9dba/init/0.log" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.651886 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-mqnbz_3d16d4be-1ab3-4261-97a7-054701cf9dba/init/0.log" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.738276 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.773353 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-mqnbz_3d16d4be-1ab3-4261-97a7-054701cf9dba/dnsmasq-dns/0.log" Feb 17 17:21:25 crc kubenswrapper[4808]: I0217 17:21:25.850709 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9nkdz_486d1a55-6cee-4d24-ab2b-5c5c61c6d3d3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.127323 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hsdg8_c51156c6-7d2b-4871-9ae0-963c4eb67454/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:26 crc kubenswrapper[4808]: E0217 17:21:26.153634 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.192533 4808 generic.go:334] "Generic (PLEG): container finished" podID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" exitCode=0 Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.192602 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea"} Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.192628 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerStarted","Data":"2536e14a994b64f27af984baacbd8fd7c12099545e13e6a5747da97bd5cf5e03"} Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.452448 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-n8rxl_8b75e2b3-ab6a-4088-897b-7a11da62a654/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.557955 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pmbdv_d178dfcd-66d8-40ba-b740-909fe6e081ac/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:26 crc kubenswrapper[4808]: E0217 17:21:26.605181 4808 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d9a64bc_8829_4eb8_b992_92f15c06c5cd.slice/crio-conmon-486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f.scope\": RecentStats: unable to find data in memory cache]" Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.756934 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-sjckt_2084629b-ffd4-4f5e-8db7-070d4a08dd8e/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:26 crc kubenswrapper[4808]: I0217 17:21:26.868522 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-tjd7w_11efc7ce-322d-4bfe-95ad-c84d779a80d8/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.063077 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-zzjwk_6fa90ca1-9ae4-4cce-a41f-640f2629ccfd/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.153923 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:27 crc kubenswrapper[4808]: E0217 17:21:27.154269 
4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.168990 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d5dbe689-5e11-4832-84c8-d603c08a23e2/glance-httpd/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.205315 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" exitCode=0 Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.205666 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f"} Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.316259 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d5dbe689-5e11-4832-84c8-d603c08a23e2/glance-log/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.358807 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b59528d2-0bad-4c66-9971-222dcaf72184/glance-httpd/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.488384 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b59528d2-0bad-4c66-9971-222dcaf72184/glance-log/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.732456 4808 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_keystone-679dfcbbb9-npbsd_8a521aa0-4048-49a0-b6c1-32e07f349ac5/keystone-api/0.log" Feb 17 17:21:27 crc kubenswrapper[4808]: I0217 17:21:27.772551 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29522461-f5wx2_d443f775-9b53-4aaf-bcda-68aed8d88e84/keystone-cron/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.031473 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_65ea994e-22f1-4dbf-8b79-8810148fad94/kube-state-metrics/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.216102 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerStarted","Data":"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94"} Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.218078 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerStarted","Data":"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169"} Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.245373 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g2wvv" podStartSLOduration=3.500940941 podStartE2EDuration="18.245348783s" podCreationTimestamp="2026-02-17 17:21:10 +0000 UTC" firstStartedPulling="2026-02-17 17:21:12.9656657 +0000 UTC m=+5236.482024773" lastFinishedPulling="2026-02-17 17:21:27.710073542 +0000 UTC m=+5251.226432615" observedRunningTime="2026-02-17 17:21:28.239362872 +0000 UTC m=+5251.755721955" watchObservedRunningTime="2026-02-17 17:21:28.245348783 +0000 UTC m=+5251.761707856" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.357257 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6c6489dbc7-2ddnw_b7e54d61-1bf6-41ae-b885-7e6448d351a5/neutron-api/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.404082 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c6489dbc7-2ddnw_b7e54d61-1bf6-41ae-b885-7e6448d351a5/neutron-httpd/0.log" Feb 17 17:21:28 crc kubenswrapper[4808]: I0217 17:21:28.944994 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e91a7ada-9f3c-4a6c-a56e-355538c9a868/nova-api-log/0.log" Feb 17 17:21:29 crc kubenswrapper[4808]: I0217 17:21:29.414566 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e91a7ada-9f3c-4a6c-a56e-355538c9a868/nova-api-api/0.log" Feb 17 17:21:29 crc kubenswrapper[4808]: I0217 17:21:29.637183 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fd596411-c54c-4a8a-9b6a-420b6ab3c9ff/nova-cell0-conductor-conductor/0.log" Feb 17 17:21:29 crc kubenswrapper[4808]: I0217 17:21:29.773935 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1c30e340-2218-46f6-97d6-aaf96a54d84d/nova-cell1-conductor-conductor/0.log" Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.122742 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e1acfe51-1173-4ce1-a645-d757d30e3312/nova-cell1-novncproxy-novncproxy/0.log" Feb 17 17:21:30 crc kubenswrapper[4808]: E0217 17:21:30.151211 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.244195 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" exitCode=0 Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.244247 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169"} Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.273035 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbdf54f1-8cfa-46c6-addd-bda126337c05/nova-metadata-log/0.log" Feb 17 17:21:30 crc kubenswrapper[4808]: I0217 17:21:30.876091 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4481dde9-062b-48d4-ae35-b6fa96ccf94e/nova-scheduler-scheduler/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.165633 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.165708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.354525 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ade81c90-5cdf-45d4-ad2f-52a3514e1596/mysql-bootstrap/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.401339 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ade81c90-5cdf-45d4-ad2f-52a3514e1596/mysql-bootstrap/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.597223 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ade81c90-5cdf-45d4-ad2f-52a3514e1596/galera/0.log" Feb 17 17:21:31 crc kubenswrapper[4808]: I0217 17:21:31.950985 4808 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a020d38c-5e24-4266-96dc-9050e4d82f46/mysql-bootstrap/0.log" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.216011 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g2wvv" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" probeResult="failure" output=< Feb 17 17:21:32 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:21:32 crc kubenswrapper[4808]: > Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.263996 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerStarted","Data":"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4"} Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.291811 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f8d96" podStartSLOduration=3.634446022 podStartE2EDuration="8.291792951s" podCreationTimestamp="2026-02-17 17:21:24 +0000 UTC" firstStartedPulling="2026-02-17 17:21:26.203973879 +0000 UTC m=+5249.720332952" lastFinishedPulling="2026-02-17 17:21:30.861320808 +0000 UTC m=+5254.377679881" observedRunningTime="2026-02-17 17:21:32.289004475 +0000 UTC m=+5255.805363548" watchObservedRunningTime="2026-02-17 17:21:32.291792951 +0000 UTC m=+5255.808152024" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.608431 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_14f49c04-388f-4eeb-be54-cbf3713606db/cloudkitty-proc/0.log" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 17:21:32.736413 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a020d38c-5e24-4266-96dc-9050e4d82f46/mysql-bootstrap/0.log" Feb 17 17:21:32 crc kubenswrapper[4808]: I0217 
17:21:32.791790 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a020d38c-5e24-4266-96dc-9050e4d82f46/galera/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.249369 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5ce308e0-2ba0-41ae-8760-e749c8d04130/openstackclient/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.323351 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fbdf54f1-8cfa-46c6-addd-bda126337c05/nova-metadata-metadata/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.379901 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qh29t_52d5a09f-33dd-49cf-9a31-a21d73a43b86/openstack-network-exporter/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.589542 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovsdb-server-init/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.694179 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovsdb-server-init/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.798208 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovs-vswitchd/0.log" Feb 17 17:21:33 crc kubenswrapper[4808]: I0217 17:21:33.869602 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-wkzp6_30b7fc5a-690b-4ac6-b37c-9c1ec074f962/ovsdb-server/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.025677 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-pfcvm_8a76a2ff-ed1a-4279-898c-54e85973f024/ovn-controller/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.133889 
4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_79b7a04d-f324-40d0-ad2b-370cfef43858/openstack-network-exporter/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.245857 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_79b7a04d-f324-40d0-ad2b-370cfef43858/ovn-northd/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.410794 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8c434a76-4dcf-4c69-aefa-5cda8b120a26/openstack-network-exporter/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.464159 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8c434a76-4dcf-4c69-aefa-5cda8b120a26/ovsdbserver-nb/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.567148 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_220c5de1-b4bf-454c-b013-17d78d86cca3/openstack-network-exporter/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.649343 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_220c5de1-b4bf-454c-b013-17d78d86cca3/ovsdbserver-sb/0.log" Feb 17 17:21:34 crc kubenswrapper[4808]: I0217 17:21:34.913328 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b995d5cb-7xs25_ab7f0766-47a0-4616-b6dc-32957d59188a/placement-api/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.005001 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-76b995d5cb-7xs25_ab7f0766-47a0-4616-b6dc-32957d59188a/placement-log/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.067325 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.067378 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.117016 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/init-config-reloader/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.279093 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/init-config-reloader/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.348166 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/config-reloader/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.365855 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/prometheus/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.366951 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_dadd7e91-13f0-4ba2-9f87-ad057567a56d/thanos-sidecar/0.log" Feb 17 17:21:35 crc kubenswrapper[4808]: I0217 17:21:35.584532 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9da8d67e-00c6-4ba1-a08b-09c5653d93fd/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.119971 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-f8d96" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" probeResult="failure" output=< Feb 17 17:21:36 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:21:36 crc kubenswrapper[4808]: > Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.183778 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9da8d67e-00c6-4ba1-a08b-09c5653d93fd/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.192439 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_357e5513-bef7-45cc-b62f-072a161ccce3/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.301658 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_9da8d67e-00c6-4ba1-a08b-09c5653d93fd/rabbitmq/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.653817 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_357e5513-bef7-45cc-b62f-072a161ccce3/setup-container/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.717479 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_357e5513-bef7-45cc-b62f-072a161ccce3/rabbitmq/0.log" Feb 17 17:21:36 crc kubenswrapper[4808]: I0217 17:21:36.846723 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-8pfvq_404291d9-a172-4a9a-8a0e-2f2514ce06ff/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.294341 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4n9tl_785a49f6-7a06-4787-a829-fc9956730c15/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.341192 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-dcfbdc547-54spv_45097e1f-e6c7-40c1-8338-3f1ac506c3fe/proxy-httpd/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.544346 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-dcfbdc547-54spv_45097e1f-e6c7-40c1-8338-3f1ac506c3fe/proxy-server/0.log" Feb 17 17:21:37 crc 
kubenswrapper[4808]: I0217 17:21:37.577613 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-qg65w_eb2856a7-c37a-4ecc-a4a2-c49864240315/swift-ring-rebalance/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.967293 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-reaper/0.log" Feb 17 17:21:37 crc kubenswrapper[4808]: I0217 17:21:37.983851 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-auditor/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.066217 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-replicator/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.103135 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/account-server/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.145537 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:38 crc kubenswrapper[4808]: E0217 17:21:38.145904 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.307878 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-server/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 
17:21:38.331842 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-auditor/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.352124 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-replicator/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.431654 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/container-updater/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.518507 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-expirer/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.647645 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-auditor/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.681465 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-server/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.700173 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-replicator/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.844668 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/object-updater/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.869422 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/rsync/0.log" Feb 17 17:21:38 crc kubenswrapper[4808]: I0217 17:21:38.944653 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8f52ebe4-f003-4d0b-8539-1d406db95b2f/swift-recon-cron/0.log" Feb 17 17:21:41 crc kubenswrapper[4808]: E0217 17:21:41.148014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:42 crc kubenswrapper[4808]: I0217 17:21:42.290134 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g2wvv" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" probeResult="failure" output=< Feb 17 17:21:42 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:21:42 crc kubenswrapper[4808]: > Feb 17 17:21:43 crc kubenswrapper[4808]: E0217 17:21:43.149039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:21:43 crc kubenswrapper[4808]: I0217 17:21:43.182975 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_2ea38754-3b00-4bcb-93d9-28b60dda0e0a/memcached/0.log" Feb 17 17:21:45 crc kubenswrapper[4808]: I0217 17:21:45.122342 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:45 crc kubenswrapper[4808]: I0217 17:21:45.176022 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:45 crc kubenswrapper[4808]: I0217 17:21:45.359704 4808 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:46 crc kubenswrapper[4808]: I0217 17:21:46.417305 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f8d96" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" containerID="cri-o://34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" gracePeriod=2 Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.161858 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.315133 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") pod \"40119af6-a3e0-44d6-abc8-df39c96836ac\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.315198 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") pod \"40119af6-a3e0-44d6-abc8-df39c96836ac\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.315406 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") pod \"40119af6-a3e0-44d6-abc8-df39c96836ac\" (UID: \"40119af6-a3e0-44d6-abc8-df39c96836ac\") " Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.317962 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities" (OuterVolumeSpecName: "utilities") pod 
"40119af6-a3e0-44d6-abc8-df39c96836ac" (UID: "40119af6-a3e0-44d6-abc8-df39c96836ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.327541 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp" (OuterVolumeSpecName: "kube-api-access-pcgzp") pod "40119af6-a3e0-44d6-abc8-df39c96836ac" (UID: "40119af6-a3e0-44d6-abc8-df39c96836ac"). InnerVolumeSpecName "kube-api-access-pcgzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.347489 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40119af6-a3e0-44d6-abc8-df39c96836ac" (UID: "40119af6-a3e0-44d6-abc8-df39c96836ac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.418291 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.418332 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40119af6-a3e0-44d6-abc8-df39c96836ac-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.418344 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcgzp\" (UniqueName: \"kubernetes.io/projected/40119af6-a3e0-44d6-abc8-df39c96836ac-kube-api-access-pcgzp\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430685 4808 generic.go:334] "Generic (PLEG): container finished" podID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" exitCode=0 Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430737 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4"} Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430782 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f8d96" event={"ID":"40119af6-a3e0-44d6-abc8-df39c96836ac","Type":"ContainerDied","Data":"2536e14a994b64f27af984baacbd8fd7c12099545e13e6a5747da97bd5cf5e03"} Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430788 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f8d96" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.430801 4808 scope.go:117] "RemoveContainer" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.463027 4808 scope.go:117] "RemoveContainer" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.481305 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.503542 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f8d96"] Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.508768 4808 scope.go:117] "RemoveContainer" containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.547811 4808 scope.go:117] "RemoveContainer" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" Feb 17 17:21:47 crc kubenswrapper[4808]: E0217 17:21:47.548441 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4\": container with ID starting with 34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4 not found: ID does not exist" containerID="34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.548479 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4"} err="failed to get container status \"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4\": rpc error: code = NotFound desc = could not find container 
\"34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4\": container with ID starting with 34cf12a8516fa96f211bf0ade4a15eb8a53165aaf5fa12f237f1539bcdae53c4 not found: ID does not exist" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.548505 4808 scope.go:117] "RemoveContainer" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" Feb 17 17:21:47 crc kubenswrapper[4808]: E0217 17:21:47.549137 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169\": container with ID starting with f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169 not found: ID does not exist" containerID="f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.549163 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169"} err="failed to get container status \"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169\": rpc error: code = NotFound desc = could not find container \"f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169\": container with ID starting with f52e1028ee668631d7d301879c6552f478f86d9433b0f76259e2b4091453e169 not found: ID does not exist" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.549182 4808 scope.go:117] "RemoveContainer" containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" Feb 17 17:21:47 crc kubenswrapper[4808]: E0217 17:21:47.549463 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea\": container with ID starting with eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea not found: ID does not exist" 
containerID="eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea" Feb 17 17:21:47 crc kubenswrapper[4808]: I0217 17:21:47.549608 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea"} err="failed to get container status \"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea\": rpc error: code = NotFound desc = could not find container \"eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea\": container with ID starting with eca172e38f749572103f9af3900358585716634266e768829cda8d4d2cf5fcea not found: ID does not exist" Feb 17 17:21:49 crc kubenswrapper[4808]: I0217 17:21:49.158931 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" path="/var/lib/kubelet/pods/40119af6-a3e0-44d6-abc8-df39c96836ac/volumes" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.146130 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:21:51 crc kubenswrapper[4808]: E0217 17:21:51.146786 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.215527 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:51 crc kubenswrapper[4808]: I0217 17:21:51.287284 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:51 crc 
kubenswrapper[4808]: I0217 17:21:51.449742 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:52 crc kubenswrapper[4808]: E0217 17:21:52.148535 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:21:52 crc kubenswrapper[4808]: I0217 17:21:52.482103 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g2wvv" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" containerID="cri-o://0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" gracePeriod=2 Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.045830 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.142054 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") pod \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.142153 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") pod \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.142291 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") pod \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\" (UID: \"9d9a64bc-8829-4eb8-b992-92f15c06c5cd\") " Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.143172 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities" (OuterVolumeSpecName: "utilities") pod "9d9a64bc-8829-4eb8-b992-92f15c06c5cd" (UID: "9d9a64bc-8829-4eb8-b992-92f15c06c5cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.151793 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk" (OuterVolumeSpecName: "kube-api-access-dbnrk") pod "9d9a64bc-8829-4eb8-b992-92f15c06c5cd" (UID: "9d9a64bc-8829-4eb8-b992-92f15c06c5cd"). InnerVolumeSpecName "kube-api-access-dbnrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.245128 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.245155 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbnrk\" (UniqueName: \"kubernetes.io/projected/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-kube-api-access-dbnrk\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.299759 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d9a64bc-8829-4eb8-b992-92f15c06c5cd" (UID: "9d9a64bc-8829-4eb8-b992-92f15c06c5cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.347720 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d9a64bc-8829-4eb8-b992-92f15c06c5cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495119 4808 generic.go:334] "Generic (PLEG): container finished" podID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" exitCode=0 Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495174 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94"} Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495186 4808 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g2wvv" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495213 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g2wvv" event={"ID":"9d9a64bc-8829-4eb8-b992-92f15c06c5cd","Type":"ContainerDied","Data":"fba4b2968632d2bd4cdd0c26e698a48d92c3645d42d2a965a77a8846ddad4b21"} Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.495240 4808 scope.go:117] "RemoveContainer" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.517834 4808 scope.go:117] "RemoveContainer" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.593642 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.597522 4808 scope.go:117] "RemoveContainer" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.624945 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g2wvv"] Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.688872 4808 scope.go:117] "RemoveContainer" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" Feb 17 17:21:53 crc kubenswrapper[4808]: E0217 17:21:53.698091 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94\": container with ID starting with 0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94 not found: ID does not exist" containerID="0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.698139 4808 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94"} err="failed to get container status \"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94\": rpc error: code = NotFound desc = could not find container \"0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94\": container with ID starting with 0d3a78f5fb095aa39c81dd33f5acf4dc012780fac7bb00799b6830fec08d8d94 not found: ID does not exist" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.698172 4808 scope.go:117] "RemoveContainer" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" Feb 17 17:21:53 crc kubenswrapper[4808]: E0217 17:21:53.701983 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f\": container with ID starting with 486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f not found: ID does not exist" containerID="486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.702030 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f"} err="failed to get container status \"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f\": rpc error: code = NotFound desc = could not find container \"486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f\": container with ID starting with 486ec7c212bbca48871a09cf79788c0160085756cf021132e3d8b32feaab142f not found: ID does not exist" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.702057 4808 scope.go:117] "RemoveContainer" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" Feb 17 17:21:53 crc kubenswrapper[4808]: E0217 
17:21:53.705905 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076\": container with ID starting with bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076 not found: ID does not exist" containerID="bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076" Feb 17 17:21:53 crc kubenswrapper[4808]: I0217 17:21:53.705947 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076"} err="failed to get container status \"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076\": rpc error: code = NotFound desc = could not find container \"bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076\": container with ID starting with bf062c4b1aac25419c20905ed7b4186bca0dfc1bb2e6718ad6071f72a64f7076 not found: ID does not exist" Feb 17 17:21:55 crc kubenswrapper[4808]: I0217 17:21:55.157392 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" path="/var/lib/kubelet/pods/9d9a64bc-8829-4eb8-b992-92f15c06c5cd/volumes" Feb 17 17:21:58 crc kubenswrapper[4808]: E0217 17:21:58.147569 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:03 crc kubenswrapper[4808]: I0217 17:22:03.145890 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:03 crc kubenswrapper[4808]: E0217 17:22:03.146747 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:03 crc kubenswrapper[4808]: E0217 17:22:03.150239 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:10 crc kubenswrapper[4808]: E0217 17:22:10.148731 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:10 crc kubenswrapper[4808]: I0217 17:22:10.723821 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/util/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.390334 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/util/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.444179 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/pull/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: 
I0217 17:22:11.454110 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/pull/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.674941 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/pull/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.733286 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/util/0.log" Feb 17 17:22:11 crc kubenswrapper[4808]: I0217 17:22:11.763708 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3524d026385f13d2f941aad43a715e33399b1aeac0c949f50e011fccd4vwgr6_bb0fef44-0d18-499b-bfd1-c684136b5095/extract/0.log" Feb 17 17:22:12 crc kubenswrapper[4808]: I0217 17:22:12.126750 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-gl97b_e2e1b5f4-7ed2-4ab1-871b-1974a7559252/manager/0.log" Feb 17 17:22:12 crc kubenswrapper[4808]: I0217 17:22:12.467066 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-b7hkk_b622bb16-c5b4-45ea-b493-e681d36d49ac/manager/0.log" Feb 17 17:22:12 crc kubenswrapper[4808]: I0217 17:22:12.687519 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-xv924_d4bd0818-617e-418a-b7c7-f70ba7ebc3d8/manager/0.log" Feb 17 17:22:13 crc kubenswrapper[4808]: I0217 17:22:13.766061 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-plpr2_681f334b-d0ac-43dc-babb-92d9cb7c0440/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.145512 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:14 crc kubenswrapper[4808]: E0217 17:22:14.146070 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.408392 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-n6qxn_6508a74d-2dba-4d1b-910c-95c9463c15a4/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.418116 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-thpj7_ace1fd54-7ff8-45b9-a77b-c3908044365e/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.505479 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-4cv77_77df5d1f-daff-4508-861a-335ab87f2366/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.797648 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8xfc6_96baec58-63b9-49cd-9cf4-32639e58d4ac/manager/0.log" Feb 17 17:22:14 crc kubenswrapper[4808]: I0217 17:22:14.822621 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-tkhr5_93278ccd-52fe-4848-9a46-3f47369d47ab/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.111280 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-vgbmj_a40e52a1-9867-413a-81fb-324789e0a009/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.214693 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-kg6xx_8d4c91a6-8441-45a6-bb6a-7655ba464fb9/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.433871 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-t9k25_a6f8ca14-e1db-4dcc-a64d-7bf137105e80/manager/0.log" Feb 17 17:22:15 crc kubenswrapper[4808]: I0217 17:22:15.577700 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9csf4ws_2ec18a16-766f-4a0c-a393-0ca7a999011e/manager/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.103390 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-64549bfd8b-rwgq9_2db6cd8b-961f-442e-8bd4-ced98807709a/operator/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.330726 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-75t5f_aa72ff82-f411-42f6-8144-937ca196211b/registry-server/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.596659 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-slw7s_6764d3f3-5e9f-4635-973e-81324dbc8e34/manager/0.log" Feb 17 17:22:16 crc kubenswrapper[4808]: I0217 17:22:16.835491 4808 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-5mm2j_0a170b4f-607d-4c7c-bd0c-ee6c29523b44/manager/0.log" Feb 17 17:22:17 crc kubenswrapper[4808]: I0217 17:22:17.094835 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xcs6n_a83d92da-4f15-4e33-ab57-ae7bc9e0da5e/operator/0.log" Feb 17 17:22:17 crc kubenswrapper[4808]: E0217 17:22:17.158509 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:17 crc kubenswrapper[4808]: I0217 17:22:17.337705 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-z4vp8_74dda28c-8860-440c-b97c-b16bab985ff0/manager/0.log" Feb 17 17:22:17 crc kubenswrapper[4808]: I0217 17:22:17.778837 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-zxqhb_b42c0b9b-cca5-4ecb-908e-508fbf932dfe/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.200285 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-546d579865-b8s4r_5e47b192-26de-4639-afe8-ec7b5fcc10c8/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.448072 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-5qkk2_cde66c49-b3c4-4f4f-b614-c4343d1c3732/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.634616 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-66fcc5ff49-dnzp5_bdd19f1d-df45-4dda-a2bd-b14da398e043/manager/0.log" Feb 17 17:22:18 crc kubenswrapper[4808]: I0217 17:22:18.703376 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-xp9sf_a2547c9d-80d6-491d-8517-26327e35a1f4/manager/0.log" Feb 17 17:22:21 crc kubenswrapper[4808]: E0217 17:22:21.147519 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:24 crc kubenswrapper[4808]: I0217 17:22:24.606453 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cjh7p_3e657888-7f8f-4d5d-8ef3-7f7472a7e4fb/manager/0.log" Feb 17 17:22:28 crc kubenswrapper[4808]: E0217 17:22:28.148657 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:29 crc kubenswrapper[4808]: I0217 17:22:29.146376 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:29 crc kubenswrapper[4808]: E0217 17:22:29.146891 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:34 crc kubenswrapper[4808]: E0217 17:22:34.148103 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.145717 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:41 crc kubenswrapper[4808]: E0217 17:22:41.146699 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.232027 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t8ws2_94f0bc0d-40c0-45b7-b6c4-7b285ba26c52/control-plane-machine-set-operator/0.log" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.412078 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-srhjb_656b06bf-9660-4c18-941b-5e5589f0301a/kube-rbac-proxy/0.log" Feb 17 17:22:41 crc kubenswrapper[4808]: I0217 17:22:41.464547 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-srhjb_656b06bf-9660-4c18-941b-5e5589f0301a/machine-api-operator/0.log" Feb 17 17:22:42 crc kubenswrapper[4808]: E0217 17:22:42.148470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:22:45 crc kubenswrapper[4808]: E0217 17:22:45.147374 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:54 crc kubenswrapper[4808]: I0217 17:22:54.506882 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2mptt_e17861f0-9138-4fa1-8fa0-7bd761f1e1bd/cert-manager-controller/0.log" Feb 17 17:22:54 crc kubenswrapper[4808]: I0217 17:22:54.665278 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cjbd9_f70c72b0-4029-491f-b93e-4b4e52c5bf77/cert-manager-cainjector/0.log" Feb 17 17:22:54 crc kubenswrapper[4808]: I0217 17:22:54.731268 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dgw65_5bcb3c4d-b451-49ff-87b7-7b95830c0628/cert-manager-webhook/0.log" Feb 17 17:22:55 crc kubenswrapper[4808]: I0217 17:22:55.146273 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:22:55 crc kubenswrapper[4808]: E0217 17:22:55.146563 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:22:57 crc kubenswrapper[4808]: E0217 17:22:57.157363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:22:57 crc kubenswrapper[4808]: E0217 17:22:57.157917 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:06 crc kubenswrapper[4808]: I0217 17:23:06.146575 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:06 crc kubenswrapper[4808]: E0217 17:23:06.147313 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.647345 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-48n66_2c731526-11bd-4ef9-bb62-eb3a0512ff1d/nmstate-console-plugin/0.log" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.862798 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q5xs9_16498191-a001-4403-af35-b76104720e91/nmstate-handler/0.log" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.913024 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-j8rw5_56fb3ff0-71b6-4792-acdf-33edb0cb23b4/kube-rbac-proxy/0.log" Feb 17 17:23:07 crc kubenswrapper[4808]: I0217 17:23:07.955755 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-j8rw5_56fb3ff0-71b6-4792-acdf-33edb0cb23b4/nmstate-metrics/0.log" Feb 17 17:23:08 crc kubenswrapper[4808]: I0217 17:23:08.090070 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-bjzdq_691d742f-d55e-48e4-89bc-7936f6b31f12/nmstate-operator/0.log" Feb 17 17:23:08 crc kubenswrapper[4808]: I0217 17:23:08.154636 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-vz75q_9f2e1846-9112-48fb-b69e-0a12393c62e6/nmstate-webhook/0.log" Feb 17 17:23:11 crc kubenswrapper[4808]: E0217 17:23:11.149116 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:12 crc kubenswrapper[4808]: E0217 17:23:12.148255 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:18 crc kubenswrapper[4808]: I0217 17:23:18.145826 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:18 crc kubenswrapper[4808]: E0217 17:23:18.146560 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:21 crc kubenswrapper[4808]: I0217 17:23:21.830301 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/manager/0.log" Feb 17 17:23:21 crc kubenswrapper[4808]: I0217 17:23:21.894161 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/kube-rbac-proxy/0.log" Feb 17 17:23:22 crc kubenswrapper[4808]: E0217 17:23:22.147867 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:23 crc kubenswrapper[4808]: E0217 17:23:23.149018 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:30 crc kubenswrapper[4808]: I0217 17:23:30.146261 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:30 crc kubenswrapper[4808]: E0217 17:23:30.147470 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:34 crc kubenswrapper[4808]: I0217 17:23:34.786015 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lshnf_038219cb-02e4-4451-b0d4-3e6af1518769/prometheus-operator/0.log" Feb 17 17:23:34 crc kubenswrapper[4808]: I0217 17:23:34.944879 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_2b8a3138-8c3d-434b-9069-8cafc18a0111/prometheus-operator-admission-webhook/0.log" Feb 17 17:23:35 crc kubenswrapper[4808]: I0217 17:23:35.009703 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_6d2656af-cd69-49ff-8d35-7c81fa4c4693/prometheus-operator-admission-webhook/0.log" Feb 17 17:23:35 crc kubenswrapper[4808]: I0217 17:23:35.169548 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7nl9q_c7703980-a631-414f-b3fc-a76dfdd1e085/operator/0.log" Feb 17 17:23:35 crc kubenswrapper[4808]: I0217 17:23:35.208626 4808 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pkvl8_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab/perses-operator/0.log" Feb 17 17:23:37 crc kubenswrapper[4808]: E0217 17:23:37.153954 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:37 crc kubenswrapper[4808]: E0217 17:23:37.153993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:45 crc kubenswrapper[4808]: I0217 17:23:45.145869 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:45 crc kubenswrapper[4808]: E0217 17:23:45.146658 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:23:49 crc kubenswrapper[4808]: E0217 17:23:49.148178 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" 
pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.454547 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jvlrt_86420ee7-2594-4ef8-8b9d-05a073118389/kube-rbac-proxy/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.701417 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.703008 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-jvlrt_86420ee7-2594-4ef8-8b9d-05a073118389/controller/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.956271 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.970410 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:51 crc kubenswrapper[4808]: I0217 17:23:51.983129 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.018854 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: E0217 17:23:52.148849 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.192018 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.225591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.246100 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.266159 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.455167 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.468882 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-frr-files/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.498656 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/controller/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.546664 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/cp-reloader/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.659058 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/frr-metrics/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 
17:23:52.718846 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/kube-rbac-proxy/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.773313 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/kube-rbac-proxy-frr/0.log" Feb 17 17:23:52 crc kubenswrapper[4808]: I0217 17:23:52.965690 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/reloader/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.080455 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-zvr84_b55883d0-d8e0-4609-8b1a-033d6808ab56/frr-k8s-webhook-server/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.314941 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6655d59788-74j79_d90f3d87-35f4-4c7d-b157-424ee7b502cd/manager/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.500874 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f74458966-dhjp5_6de38240-7d75-47a0-b5c1-788f619bb8ff/webhook-server/0.log" Feb 17 17:23:53 crc kubenswrapper[4808]: I0217 17:23:53.583900 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2hrgh_c8e5bfe8-d4de-4863-b830-db146a4f0bd8/kube-rbac-proxy/0.log" Feb 17 17:23:54 crc kubenswrapper[4808]: I0217 17:23:54.084333 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-c58vl_42711d14-278f-41eb-80ce-2e67add356b9/frr/0.log" Feb 17 17:23:54 crc kubenswrapper[4808]: I0217 17:23:54.299041 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2hrgh_c8e5bfe8-d4de-4863-b830-db146a4f0bd8/speaker/0.log" Feb 17 17:23:58 crc 
kubenswrapper[4808]: I0217 17:23:58.146021 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:23:58 crc kubenswrapper[4808]: I0217 17:23:58.728812 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f"} Feb 17 17:24:00 crc kubenswrapper[4808]: E0217 17:24:00.148698 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:07 crc kubenswrapper[4808]: E0217 17:24:07.158217 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.020696 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.447738 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/pull/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.450769 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/pull/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.495336 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.595773 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/extract/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.617534 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.643049 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651nnldz_da4f14dc-179d-4178-9a9c-747ab825f3e4/pull/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.828736 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/util/0.log" Feb 17 17:24:09 crc kubenswrapper[4808]: I0217 17:24:09.996546 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.029168 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/util/0.log" Feb 17 
17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.054231 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.236015 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.248780 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.260906 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gm8bm_11d9feea-2c1d-48e4-9cf4-bde172f9faea/extract/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.411707 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.615798 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/util/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.634697 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/pull/0.log" Feb 17 17:24:10 crc kubenswrapper[4808]: I0217 17:24:10.683319 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/pull/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: E0217 17:24:11.147775 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.349298 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/util/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.380501 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/extract/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.416820 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213kj6bw_df1cf40f-e7a2-40b1-8adb-45d2b5205584/pull/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.546135 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-utilities/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.728541 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-content/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.745561 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-utilities/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.789421 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-content/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.927454 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-utilities/0.log" Feb 17 17:24:11 crc kubenswrapper[4808]: I0217 17:24:11.972963 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.193740 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-utilities/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.394328 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-utilities/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.431005 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.496346 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.698802 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-pgghj_7b0c9cdb-4343-4e20-b099-0f1d04243839/registry-server/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.724442 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-content/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.812164 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/extract-utilities/0.log" Feb 17 17:24:12 crc kubenswrapper[4808]: I0217 17:24:12.976348 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/util/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.279878 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/util/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.317005 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/pull/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.342625 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/pull/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.437537 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-snf82_9b925660-1865-4603-8f8e-f21a1c342f63/registry-server/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.538858 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/pull/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.539417 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/extract/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.557357 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecal9zzl_5903df73-c7d6-46cf-8aa2-4f0067c08b99/util/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.664030 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-v2wfq_012287fd-dda3-4c7b-af1f-576ec2dc479b/marketplace-operator/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.713420 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-utilities/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.891403 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-content/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.895976 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-utilities/0.log" Feb 17 17:24:13 crc kubenswrapper[4808]: I0217 17:24:13.918591 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.086069 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-utilities/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.095737 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.102190 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-utilities/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.261171 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-bbhct_5011758e-a6e4-4491-8ac6-c0a8bcb50568/registry-server/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.399415 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.409078 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.415977 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-utilities/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.576637 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-content/0.log" Feb 17 17:24:14 crc kubenswrapper[4808]: I0217 17:24:14.604438 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/extract-utilities/0.log" Feb 
17 17:24:15 crc kubenswrapper[4808]: I0217 17:24:15.252270 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lstjz_bcdfcb0d-7a0d-4cee-a80f-f49f078bef37/registry-server/0.log" Feb 17 17:24:20 crc kubenswrapper[4808]: E0217 17:24:20.148829 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:24 crc kubenswrapper[4808]: E0217 17:24:24.147221 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.045354 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-lshnf_038219cb-02e4-4451-b0d4-3e6af1518769/prometheus-operator/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.054693 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-qxc24_6d2656af-cd69-49ff-8d35-7c81fa4c4693/prometheus-operator-admission-webhook/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.054710 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-98b6f68bc-j86z5_2b8a3138-8c3d-434b-9069-8cafc18a0111/prometheus-operator-admission-webhook/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.224977 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7nl9q_c7703980-a631-414f-b3fc-a76dfdd1e085/operator/0.log" Feb 17 17:24:30 crc kubenswrapper[4808]: I0217 17:24:30.265451 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pkvl8_b6f5eae7-5253-4562-a5d0-30dfe6e5a8ab/perses-operator/0.log" Feb 17 17:24:34 crc kubenswrapper[4808]: E0217 17:24:34.148351 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:35 crc kubenswrapper[4808]: E0217 17:24:35.147362 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.409553 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/manager/0.log" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.414275 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fb78767c-g2qqj_fb7a346a-c0ef-4aa3-bfb0-b111bdef90ec/kube-rbac-proxy/0.log" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.857374 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858203 4808 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858269 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858326 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858376 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858428 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858476 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858537 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858616 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858678 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858727 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-content" Feb 17 17:24:46 crc kubenswrapper[4808]: E0217 17:24:46.858782 4808 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.858831 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="extract-utilities" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.859057 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="40119af6-a3e0-44d6-abc8-df39c96836ac" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.859127 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d9a64bc-8829-4eb8-b992-92f15c06c5cd" containerName="registry-server" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.860531 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:46 crc kubenswrapper[4808]: I0217 17:24:46.873462 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.011648 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.011704 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.012057 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114030 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114693 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.114711 4808 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.147547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"community-operators-wlh7l\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.195621 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:47 crc kubenswrapper[4808]: I0217 17:24:47.798626 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:24:48 crc kubenswrapper[4808]: E0217 17:24:48.148318 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:24:48 crc kubenswrapper[4808]: I0217 17:24:48.209755 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerStarted","Data":"8a25a6931025d6f6be5fcb2fccd2fda1166a482876723231d8e539131a85c6ff"} Feb 17 17:24:49 crc kubenswrapper[4808]: E0217 17:24:49.148323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:24:49 crc kubenswrapper[4808]: I0217 17:24:49.219392 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" exitCode=0 Feb 17 17:24:49 crc kubenswrapper[4808]: I0217 17:24:49.219432 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d"} Feb 17 17:24:50 crc kubenswrapper[4808]: I0217 17:24:50.245014 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerStarted","Data":"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd"} Feb 17 17:24:53 crc kubenswrapper[4808]: I0217 17:24:53.295167 4808 generic.go:334] "Generic (PLEG): container finished" podID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" exitCode=0 Feb 17 17:24:53 crc kubenswrapper[4808]: I0217 17:24:53.295265 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd"} Feb 17 17:24:54 crc kubenswrapper[4808]: I0217 17:24:54.320140 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerStarted","Data":"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede"} Feb 17 
17:24:54 crc kubenswrapper[4808]: I0217 17:24:54.366223 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wlh7l" podStartSLOduration=3.857899531 podStartE2EDuration="8.366206873s" podCreationTimestamp="2026-02-17 17:24:46 +0000 UTC" firstStartedPulling="2026-02-17 17:24:49.221373891 +0000 UTC m=+5452.737732964" lastFinishedPulling="2026-02-17 17:24:53.729681223 +0000 UTC m=+5457.246040306" observedRunningTime="2026-02-17 17:24:54.359646655 +0000 UTC m=+5457.876005728" watchObservedRunningTime="2026-02-17 17:24:54.366206873 +0000 UTC m=+5457.882565946" Feb 17 17:24:57 crc kubenswrapper[4808]: I0217 17:24:57.196487 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:57 crc kubenswrapper[4808]: I0217 17:24:57.196928 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:24:58 crc kubenswrapper[4808]: I0217 17:24:58.241839 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wlh7l" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" probeResult="failure" output=< Feb 17 17:24:58 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:24:58 crc kubenswrapper[4808]: > Feb 17 17:24:59 crc kubenswrapper[4808]: E0217 17:24:59.146626 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:00 crc kubenswrapper[4808]: E0217 17:25:00.147352 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:08 crc kubenswrapper[4808]: I0217 17:25:08.250170 4808 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wlh7l" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" probeResult="failure" output=< Feb 17 17:25:08 crc kubenswrapper[4808]: timeout: failed to connect service ":50051" within 1s Feb 17 17:25:08 crc kubenswrapper[4808]: > Feb 17 17:25:12 crc kubenswrapper[4808]: E0217 17:25:12.147830 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:13 crc kubenswrapper[4808]: E0217 17:25:13.166512 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:17 crc kubenswrapper[4808]: I0217 17:25:17.327610 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:17 crc kubenswrapper[4808]: I0217 17:25:17.406138 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:17 crc kubenswrapper[4808]: I0217 17:25:17.564972 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:25:18 crc kubenswrapper[4808]: I0217 17:25:18.561251 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wlh7l" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" containerID="cri-o://466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" gracePeriod=2 Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.171668 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.309321 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") pod \"c6abeea5-59f7-4b89-a47c-bee82aac4741\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.309393 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") pod \"c6abeea5-59f7-4b89-a47c-bee82aac4741\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.309552 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") pod \"c6abeea5-59f7-4b89-a47c-bee82aac4741\" (UID: \"c6abeea5-59f7-4b89-a47c-bee82aac4741\") " Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.310781 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities" (OuterVolumeSpecName: "utilities") pod "c6abeea5-59f7-4b89-a47c-bee82aac4741" (UID: 
"c6abeea5-59f7-4b89-a47c-bee82aac4741"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.336767 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b" (OuterVolumeSpecName: "kube-api-access-9qb2b") pod "c6abeea5-59f7-4b89-a47c-bee82aac4741" (UID: "c6abeea5-59f7-4b89-a47c-bee82aac4741"). InnerVolumeSpecName "kube-api-access-9qb2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.393211 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6abeea5-59f7-4b89-a47c-bee82aac4741" (UID: "c6abeea5-59f7-4b89-a47c-bee82aac4741"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.413807 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.413858 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6abeea5-59f7-4b89-a47c-bee82aac4741-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.413879 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qb2b\" (UniqueName: \"kubernetes.io/projected/c6abeea5-59f7-4b89-a47c-bee82aac4741-kube-api-access-9qb2b\") on node \"crc\" DevicePath \"\"" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.577890 4808 generic.go:334] "Generic (PLEG): container finished" 
podID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" exitCode=0 Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.577954 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede"} Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.578011 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wlh7l" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.578036 4808 scope.go:117] "RemoveContainer" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.578018 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wlh7l" event={"ID":"c6abeea5-59f7-4b89-a47c-bee82aac4741","Type":"ContainerDied","Data":"8a25a6931025d6f6be5fcb2fccd2fda1166a482876723231d8e539131a85c6ff"} Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.644805 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.646415 4808 scope.go:117] "RemoveContainer" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.662125 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wlh7l"] Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.684735 4808 scope.go:117] "RemoveContainer" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.737170 4808 scope.go:117] "RemoveContainer" 
containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" Feb 17 17:25:19 crc kubenswrapper[4808]: E0217 17:25:19.742200 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede\": container with ID starting with 466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede not found: ID does not exist" containerID="466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742256 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede"} err="failed to get container status \"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede\": rpc error: code = NotFound desc = could not find container \"466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede\": container with ID starting with 466dba8e1e7a633742fe8a6b8681ccced6381d274bc461ee92c102da1aa1eede not found: ID does not exist" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742291 4808 scope.go:117] "RemoveContainer" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" Feb 17 17:25:19 crc kubenswrapper[4808]: E0217 17:25:19.742854 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd\": container with ID starting with 7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd not found: ID does not exist" containerID="7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742902 4808 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd"} err="failed to get container status \"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd\": rpc error: code = NotFound desc = could not find container \"7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd\": container with ID starting with 7e1e70ea95f9af0e5ac87f6cfc8ba3e3136f9ce0e6178ed86a5488af66d3f0fd not found: ID does not exist" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.742933 4808 scope.go:117] "RemoveContainer" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" Feb 17 17:25:19 crc kubenswrapper[4808]: E0217 17:25:19.743416 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d\": container with ID starting with 2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d not found: ID does not exist" containerID="2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d" Feb 17 17:25:19 crc kubenswrapper[4808]: I0217 17:25:19.743458 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d"} err="failed to get container status \"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d\": rpc error: code = NotFound desc = could not find container \"2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d\": container with ID starting with 2a62b920a605ea8344d4c8c97e6919fa689e4888f2666af6e339c4d1c28a3a0d not found: ID does not exist" Feb 17 17:25:21 crc kubenswrapper[4808]: I0217 17:25:21.163711 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" path="/var/lib/kubelet/pods/c6abeea5-59f7-4b89-a47c-bee82aac4741/volumes" Feb 17 17:25:24 crc kubenswrapper[4808]: E0217 
17:25:24.150852 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:28 crc kubenswrapper[4808]: E0217 17:25:28.148678 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:39 crc kubenswrapper[4808]: I0217 17:25:39.149275 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.286283 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.286668 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.286835 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:n
il,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:25:39 crc kubenswrapper[4808]: E0217 17:25:39.288144 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:43 crc kubenswrapper[4808]: E0217 17:25:43.148267 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:25:51 crc kubenswrapper[4808]: E0217 17:25:51.151286 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:25:56 crc kubenswrapper[4808]: E0217 17:25:56.149260 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:06 crc kubenswrapper[4808]: E0217 17:26:06.147930 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.283629 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.284128 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.284244 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 17 17:26:09 crc kubenswrapper[4808]: E0217 17:26:09.285456 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:19 crc kubenswrapper[4808]: I0217 17:26:19.382329 4808 scope.go:117] "RemoveContainer" containerID="ed47e3d22836b6652cf2ffaee8f878d60a025a964ccb085ff32c6031cfeb2f0b" Feb 17 17:26:19 crc kubenswrapper[4808]: I0217 17:26:19.417684 4808 scope.go:117] "RemoveContainer" containerID="9102d6dcaf6e3fbf8c87936c002d9f93bfb04d65b7f6656f4e84306710e44084" Feb 17 17:26:19 crc kubenswrapper[4808]: I0217 17:26:19.448181 4808 scope.go:117] "RemoveContainer" containerID="7d865228fa25e7ce12749d7c2c4de36bd67d5fa5524e81ad097c8a1b40849e1b" Feb 17 17:26:20 crc kubenswrapper[4808]: E0217 17:26:20.147114 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:21 crc kubenswrapper[4808]: I0217 17:26:21.591882 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:21 crc kubenswrapper[4808]: I0217 17:26:21.592300 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:26:22 crc kubenswrapper[4808]: E0217 17:26:22.150812 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:26 crc kubenswrapper[4808]: I0217 17:26:26.629362 4808 generic.go:334] "Generic (PLEG): container finished" podID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerID="c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2" exitCode=0 Feb 17 17:26:26 crc kubenswrapper[4808]: I0217 17:26:26.629446 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v84wc/must-gather-25mrk" event={"ID":"6431aef1-ada4-4683-967f-18a8a901d3f7","Type":"ContainerDied","Data":"c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2"} Feb 17 17:26:26 crc kubenswrapper[4808]: I0217 17:26:26.631508 4808 scope.go:117] "RemoveContainer" containerID="c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2" Feb 17 17:26:27 crc kubenswrapper[4808]: I0217 17:26:27.179251 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/gather/0.log" Feb 17 17:26:35 crc kubenswrapper[4808]: E0217 17:26:35.155209 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:35 crc kubenswrapper[4808]: E0217 17:26:35.155701 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.460711 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.460998 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v84wc/must-gather-25mrk" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" containerID="cri-o://271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306" gracePeriod=2 Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.473667 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v84wc/must-gather-25mrk"] Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.752448 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/copy/0.log" Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.753426 4808 generic.go:334] "Generic (PLEG): container finished" podID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerID="271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306" exitCode=143 Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.956641 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/copy/0.log" Feb 17 17:26:35 crc kubenswrapper[4808]: I0217 17:26:35.957192 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.147564 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") pod \"6431aef1-ada4-4683-967f-18a8a901d3f7\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.147941 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") pod \"6431aef1-ada4-4683-967f-18a8a901d3f7\" (UID: \"6431aef1-ada4-4683-967f-18a8a901d3f7\") " Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.161259 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd" (OuterVolumeSpecName: "kube-api-access-l4xpd") pod "6431aef1-ada4-4683-967f-18a8a901d3f7" (UID: "6431aef1-ada4-4683-967f-18a8a901d3f7"). InnerVolumeSpecName "kube-api-access-l4xpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.251757 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4xpd\" (UniqueName: \"kubernetes.io/projected/6431aef1-ada4-4683-967f-18a8a901d3f7-kube-api-access-l4xpd\") on node \"crc\" DevicePath \"\"" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.356011 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6431aef1-ada4-4683-967f-18a8a901d3f7" (UID: "6431aef1-ada4-4683-967f-18a8a901d3f7"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.456390 4808 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6431aef1-ada4-4683-967f-18a8a901d3f7-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.768006 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v84wc_must-gather-25mrk_6431aef1-ada4-4683-967f-18a8a901d3f7/copy/0.log" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.769000 4808 scope.go:117] "RemoveContainer" containerID="271d9b2135c3935ec151eefdbaf495f4a45fec452012708df37252c90b672306" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.769225 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v84wc/must-gather-25mrk" Feb 17 17:26:36 crc kubenswrapper[4808]: I0217 17:26:36.827806 4808 scope.go:117] "RemoveContainer" containerID="c40142ef958d484b3d88ec057c33b3f5b4fdb38dd3e73ba0134c4e1e89733ac2" Feb 17 17:26:37 crc kubenswrapper[4808]: I0217 17:26:37.157529 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" path="/var/lib/kubelet/pods/6431aef1-ada4-4683-967f-18a8a901d3f7/volumes" Feb 17 17:26:47 crc kubenswrapper[4808]: E0217 17:26:47.197753 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:26:50 crc kubenswrapper[4808]: E0217 17:26:50.148867 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:26:51 crc kubenswrapper[4808]: I0217 17:26:51.592243 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:26:51 crc kubenswrapper[4808]: I0217 17:26:51.592657 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:27:01 crc kubenswrapper[4808]: E0217 17:27:01.148323 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:05 crc kubenswrapper[4808]: E0217 17:27:05.148523 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:27:12 crc kubenswrapper[4808]: E0217 17:27:12.171142 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:17 crc kubenswrapper[4808]: E0217 17:27:17.161471 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.592433 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593021 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593067 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:27:21 crc kubenswrapper[4808]: I0217 17:27:21.593899 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 17 17:27:21 crc 
kubenswrapper[4808]: I0217 17:27:21.593960 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f" gracePeriod=600 Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.254845 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f" exitCode=0 Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.254970 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f"} Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.255424 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerStarted","Data":"21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"} Feb 17 17:27:22 crc kubenswrapper[4808]: I0217 17:27:22.255448 4808 scope.go:117] "RemoveContainer" containerID="700c3283572281c218af9f0b845d6de62277f81d69443b3b1ffcaa7d804aa22e" Feb 17 17:27:23 crc kubenswrapper[4808]: E0217 17:27:23.148138 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:28 crc kubenswrapper[4808]: E0217 17:27:28.149306 4808 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:27:35 crc kubenswrapper[4808]: E0217 17:27:35.149028 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:40 crc kubenswrapper[4808]: E0217 17:27:40.149665 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:27:48 crc kubenswrapper[4808]: E0217 17:27:48.148299 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:27:53 crc kubenswrapper[4808]: E0217 17:27:53.149846 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:28:01 crc kubenswrapper[4808]: E0217 
17:28:01.149865 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:28:05 crc kubenswrapper[4808]: E0217 17:28:05.150623 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:28:13 crc kubenswrapper[4808]: E0217 17:28:13.150924 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:28:16 crc kubenswrapper[4808]: E0217 17:28:16.146629 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:28:25 crc kubenswrapper[4808]: E0217 17:28:25.148004 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" 
podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:28:28 crc kubenswrapper[4808]: E0217 17:28:28.154221 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:28:38 crc kubenswrapper[4808]: E0217 17:28:38.148353 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:28:42 crc kubenswrapper[4808]: E0217 17:28:42.149173 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:28:53 crc kubenswrapper[4808]: E0217 17:28:53.148216 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:28:56 crc kubenswrapper[4808]: E0217 17:28:56.149330 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:07 crc kubenswrapper[4808]: E0217 17:29:07.152343 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:07 crc kubenswrapper[4808]: E0217 17:29:07.152356 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:20 crc kubenswrapper[4808]: E0217 17:29:20.149037 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:21 crc kubenswrapper[4808]: I0217 17:29:21.592718 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:29:21 crc kubenswrapper[4808]: I0217 17:29:21.593075 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:22 crc kubenswrapper[4808]: E0217 17:29:22.147923 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:32 crc kubenswrapper[4808]: E0217 17:29:32.148755 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:36 crc kubenswrapper[4808]: E0217 17:29:36.148530 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:44 crc kubenswrapper[4808]: E0217 17:29:44.150559 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:29:50 crc kubenswrapper[4808]: E0217 17:29:50.148203 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:29:51 crc kubenswrapper[4808]: I0217 17:29:51.591954 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:29:51 crc kubenswrapper[4808]: I0217 17:29:51.592044 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:29:57 crc kubenswrapper[4808]: E0217 17:29:57.159575 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.164931 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4"] Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165832 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-utilities" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165851 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-utilities" Feb 17 17:30:00 crc 
kubenswrapper[4808]: E0217 17:30:00.165877 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165885 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="extract-content" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165917 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="gather" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165927 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="gather" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165939 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165947 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4808]: E0217 17:30:00.165965 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.165972 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.166234 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="gather" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.166248 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6abeea5-59f7-4b89-a47c-bee82aac4741" containerName="registry-server" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.166261 4808 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6431aef1-ada4-4683-967f-18a8a901d3f7" containerName="copy" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.167060 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.170417 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.171465 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.180114 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4"] Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.250012 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.250287 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.250466 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.354671 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.354826 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.355280 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.355991 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc 
kubenswrapper[4808]: I0217 17:30:00.365296 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.372483 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"collect-profiles-29522490-vz8d4\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.495208 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:00 crc kubenswrapper[4808]: I0217 17:30:00.957675 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4"] Feb 17 17:30:01 crc kubenswrapper[4808]: I0217 17:30:01.058810 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" event={"ID":"ea831acb-24b6-4b34-9f26-5deb1d134bba","Type":"ContainerStarted","Data":"1f5029ea81d35ef8da22634b533b22242da37444b392ffdc0447ae81517dc0fb"} Feb 17 17:30:02 crc kubenswrapper[4808]: I0217 17:30:02.072442 4808 generic.go:334] "Generic (PLEG): container finished" podID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerID="5c8cb2f0ac8654a5c60f57179a47aa3c9838af7e2b7c0c647a02c3ef5c293184" exitCode=0 Feb 17 17:30:02 crc kubenswrapper[4808]: I0217 17:30:02.072548 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" event={"ID":"ea831acb-24b6-4b34-9f26-5deb1d134bba","Type":"ContainerDied","Data":"5c8cb2f0ac8654a5c60f57179a47aa3c9838af7e2b7c0c647a02c3ef5c293184"} Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.585274 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.746759 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") pod \"ea831acb-24b6-4b34-9f26-5deb1d134bba\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.746998 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") pod \"ea831acb-24b6-4b34-9f26-5deb1d134bba\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.747173 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") pod \"ea831acb-24b6-4b34-9f26-5deb1d134bba\" (UID: \"ea831acb-24b6-4b34-9f26-5deb1d134bba\") " Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.747960 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea831acb-24b6-4b34-9f26-5deb1d134bba" (UID: "ea831acb-24b6-4b34-9f26-5deb1d134bba"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.752982 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k" (OuterVolumeSpecName: "kube-api-access-ggd6k") pod "ea831acb-24b6-4b34-9f26-5deb1d134bba" (UID: "ea831acb-24b6-4b34-9f26-5deb1d134bba"). InnerVolumeSpecName "kube-api-access-ggd6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.756812 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea831acb-24b6-4b34-9f26-5deb1d134bba" (UID: "ea831acb-24b6-4b34-9f26-5deb1d134bba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.849832 4808 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea831acb-24b6-4b34-9f26-5deb1d134bba-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.849874 4808 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea831acb-24b6-4b34-9f26-5deb1d134bba-config-volume\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:03 crc kubenswrapper[4808]: I0217 17:30:03.849885 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggd6k\" (UniqueName: \"kubernetes.io/projected/ea831acb-24b6-4b34-9f26-5deb1d134bba-kube-api-access-ggd6k\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.093153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" 
event={"ID":"ea831acb-24b6-4b34-9f26-5deb1d134bba","Type":"ContainerDied","Data":"1f5029ea81d35ef8da22634b533b22242da37444b392ffdc0447ae81517dc0fb"} Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.093513 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f5029ea81d35ef8da22634b533b22242da37444b392ffdc0447ae81517dc0fb" Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.093276 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522490-vz8d4" Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.673455 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 17:30:04 crc kubenswrapper[4808]: I0217 17:30:04.685650 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522445-ttsld"] Feb 17 17:30:05 crc kubenswrapper[4808]: E0217 17:30:05.148471 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:05 crc kubenswrapper[4808]: I0217 17:30:05.165997 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="450a44d1-3fb2-41f5-9200-59c6c1838c86" path="/var/lib/kubelet/pods/450a44d1-3fb2-41f5-9200-59c6c1838c86/volumes" Feb 17 17:30:12 crc kubenswrapper[4808]: E0217 17:30:12.149015 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:17 crc kubenswrapper[4808]: E0217 17:30:17.156450 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:19 crc kubenswrapper[4808]: I0217 17:30:19.650678 4808 scope.go:117] "RemoveContainer" containerID="51178eccc89b955640453b414bcd16d1523ac289cf0ed8497a9b4ca6a3ebaa2d" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.593090 4808 patch_prober.go:28] interesting pod/machine-config-daemon-k8v8k container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.593695 4808 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.593756 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.594683 4808 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"} pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" containerMessage="Container machine-config-daemon failed liveness 
probe, will be restarted" Feb 17 17:30:21 crc kubenswrapper[4808]: I0217 17:30:21.594742 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerName="machine-config-daemon" containerID="cri-o://21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" gracePeriod=600 Feb 17 17:30:21 crc kubenswrapper[4808]: E0217 17:30:21.732328 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.302997 4808 generic.go:334] "Generic (PLEG): container finished" podID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" exitCode=0 Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.303056 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" event={"ID":"ca38b6e7-b21c-453d-8b6c-a163dac84b35","Type":"ContainerDied","Data":"21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"} Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.303107 4808 scope.go:117] "RemoveContainer" containerID="6a461065a2b0984e9cb114713503f1076e495225fe534e196caafd6860edb08f" Feb 17 17:30:22 crc kubenswrapper[4808]: I0217 17:30:22.304253 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:30:22 crc kubenswrapper[4808]: E0217 17:30:22.304859 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:30:24 crc kubenswrapper[4808]: E0217 17:30:24.148468 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.600628 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:31 crc kubenswrapper[4808]: E0217 17:30:31.605534 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerName="collect-profiles" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.605559 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerName="collect-profiles" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.606366 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea831acb-24b6-4b34-9f26-5deb1d134bba" containerName="collect-profiles" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.614977 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.615120 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.684677 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.684884 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.684963 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.787460 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.788288 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod 
\"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.788454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.788996 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.789232 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.823489 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"certified-operators-hf4ww\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:31 crc kubenswrapper[4808]: I0217 17:30:31.944277 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:32 crc kubenswrapper[4808]: E0217 17:30:32.180181 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:32 crc kubenswrapper[4808]: I0217 17:30:32.539150 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:33 crc kubenswrapper[4808]: I0217 17:30:33.449889 4808 generic.go:334] "Generic (PLEG): container finished" podID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" exitCode=0 Feb 17 17:30:33 crc kubenswrapper[4808]: I0217 17:30:33.449944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5"} Feb 17 17:30:33 crc kubenswrapper[4808]: I0217 17:30:33.450484 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerStarted","Data":"d02687da12e1bb2927925182c84d9031a7ea83d264434f7638701e7bfa4e0094"} Feb 17 17:30:34 crc kubenswrapper[4808]: I0217 17:30:34.463960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerStarted","Data":"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3"} Feb 17 17:30:36 crc kubenswrapper[4808]: I0217 17:30:36.145845 4808 scope.go:117] "RemoveContainer" 
containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:30:36 crc kubenswrapper[4808]: E0217 17:30:36.146443 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:30:37 crc kubenswrapper[4808]: I0217 17:30:37.503543 4808 generic.go:334] "Generic (PLEG): container finished" podID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" exitCode=0 Feb 17 17:30:37 crc kubenswrapper[4808]: I0217 17:30:37.503623 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3"} Feb 17 17:30:38 crc kubenswrapper[4808]: I0217 17:30:38.517277 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerStarted","Data":"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"} Feb 17 17:30:38 crc kubenswrapper[4808]: I0217 17:30:38.541970 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hf4ww" podStartSLOduration=3.07937231 podStartE2EDuration="7.541944944s" podCreationTimestamp="2026-02-17 17:30:31 +0000 UTC" firstStartedPulling="2026-02-17 17:30:33.453084677 +0000 UTC m=+5796.969443750" lastFinishedPulling="2026-02-17 17:30:37.915657271 +0000 UTC m=+5801.432016384" observedRunningTime="2026-02-17 17:30:38.535747836 +0000 
UTC m=+5802.052106909" watchObservedRunningTime="2026-02-17 17:30:38.541944944 +0000 UTC m=+5802.058304057" Feb 17 17:30:39 crc kubenswrapper[4808]: E0217 17:30:39.148494 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:41 crc kubenswrapper[4808]: I0217 17:30:41.944543 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:41 crc kubenswrapper[4808]: I0217 17:30:41.945185 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:42 crc kubenswrapper[4808]: I0217 17:30:42.019866 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:43 crc kubenswrapper[4808]: I0217 17:30:43.150221 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.281450 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.282164 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested" Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.282352 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volume
Mount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnd2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-zl7nk_openstack(a4b182d0-48fc-4487-b7ad-18f7803a4d4c): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:30:43 crc kubenswrapper[4808]: E0217 17:30:43.283902 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:49 crc kubenswrapper[4808]: I0217 17:30:49.146843 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:30:49 crc kubenswrapper[4808]: E0217 17:30:49.147879 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:30:50 crc kubenswrapper[4808]: E0217 17:30:50.148993 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:30:52 crc kubenswrapper[4808]: I0217 17:30:52.031265 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:52 crc kubenswrapper[4808]: I0217 17:30:52.093909 4808 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:52 crc kubenswrapper[4808]: I0217 17:30:52.684301 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hf4ww" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" containerID="cri-o://bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" gracePeriod=2 Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.240921 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.283365 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") pod \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.283455 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") pod \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.283487 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") pod \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\" (UID: \"c342da3e-2aeb-4794-b93b-816f13e8dbf0\") " Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.284280 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities" (OuterVolumeSpecName: "utilities") pod 
"c342da3e-2aeb-4794-b93b-816f13e8dbf0" (UID: "c342da3e-2aeb-4794-b93b-816f13e8dbf0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.308705 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz" (OuterVolumeSpecName: "kube-api-access-jjlzz") pod "c342da3e-2aeb-4794-b93b-816f13e8dbf0" (UID: "c342da3e-2aeb-4794-b93b-816f13e8dbf0"). InnerVolumeSpecName "kube-api-access-jjlzz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.332946 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c342da3e-2aeb-4794-b93b-816f13e8dbf0" (UID: "c342da3e-2aeb-4794-b93b-816f13e8dbf0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.385408 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjlzz\" (UniqueName: \"kubernetes.io/projected/c342da3e-2aeb-4794-b93b-816f13e8dbf0-kube-api-access-jjlzz\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.385442 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-utilities\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.385451 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c342da3e-2aeb-4794-b93b-816f13e8dbf0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696159 4808 generic.go:334] "Generic (PLEG): container finished" podID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" exitCode=0 Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696197 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"} Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696221 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hf4ww" event={"ID":"c342da3e-2aeb-4794-b93b-816f13e8dbf0","Type":"ContainerDied","Data":"d02687da12e1bb2927925182c84d9031a7ea83d264434f7638701e7bfa4e0094"} Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.696238 4808 scope.go:117] "RemoveContainer" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 
17:30:53.696337 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hf4ww" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.721721 4808 scope.go:117] "RemoveContainer" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.740014 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.749745 4808 scope.go:117] "RemoveContainer" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.755819 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hf4ww"] Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.831108 4808 scope.go:117] "RemoveContainer" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" Feb 17 17:30:53 crc kubenswrapper[4808]: E0217 17:30:53.832217 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56\": container with ID starting with bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56 not found: ID does not exist" containerID="bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832274 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56"} err="failed to get container status \"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56\": rpc error: code = NotFound desc = could not find container \"bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56\": container with ID starting with 
bbc852ee41e59782c088b559c60dc802664e0cbe5ae01deaec7b958eda9ffa56 not found: ID does not exist" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832301 4808 scope.go:117] "RemoveContainer" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" Feb 17 17:30:53 crc kubenswrapper[4808]: E0217 17:30:53.832642 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3\": container with ID starting with 7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3 not found: ID does not exist" containerID="7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832677 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3"} err="failed to get container status \"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3\": rpc error: code = NotFound desc = could not find container \"7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3\": container with ID starting with 7507b7f5af13914618689c5517c7b7b310b093cc13096b6d41153324c64071e3 not found: ID does not exist" Feb 17 17:30:53 crc kubenswrapper[4808]: I0217 17:30:53.832707 4808 scope.go:117] "RemoveContainer" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" Feb 17 17:30:53 crc kubenswrapper[4808]: E0217 17:30:53.832934 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5\": container with ID starting with 87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5 not found: ID does not exist" containerID="87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5" Feb 17 17:30:53 crc 
kubenswrapper[4808]: I0217 17:30:53.832961 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5"} err="failed to get container status \"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5\": rpc error: code = NotFound desc = could not find container \"87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5\": container with ID starting with 87dbbe86e569cdbd049e343ff0348987d288c89683172334820561f2e3545ac5 not found: ID does not exist" Feb 17 17:30:55 crc kubenswrapper[4808]: E0217 17:30:55.147613 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:30:55 crc kubenswrapper[4808]: I0217 17:30:55.159346 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" path="/var/lib/kubelet/pods/c342da3e-2aeb-4794-b93b-816f13e8dbf0/volumes" Feb 17 17:31:02 crc kubenswrapper[4808]: I0217 17:31:02.148873 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:02 crc kubenswrapper[4808]: E0217 17:31:02.149922 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:04 crc kubenswrapper[4808]: E0217 17:31:04.148744 4808 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:06 crc kubenswrapper[4808]: E0217 17:31:06.148520 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:15 crc kubenswrapper[4808]: I0217 17:31:15.145512 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:15 crc kubenswrapper[4808]: E0217 17:31:15.148058 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.305538 4808 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.305869 4808 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.306047 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfchb4h678h649h5fbh664h79h7fh666h5bfh68h565h555h59dh5b6h5bfh66ch645h547h5cbh549h9fh58bh5d4hcfh78h68chc7h5ch67dhc7h5b4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:t
ls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjgf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2876084b-7055-449d-9ddb-447d3a515d80): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 17 17:31:16 crc kubenswrapper[4808]: E0217 17:31:16.307624 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:20 crc kubenswrapper[4808]: E0217 17:31:20.147998 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:28 crc kubenswrapper[4808]: I0217 17:31:28.145816 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:28 crc kubenswrapper[4808]: E0217 17:31:28.146660 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:28 crc kubenswrapper[4808]: E0217 17:31:28.149237 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:32 crc kubenswrapper[4808]: E0217 17:31:32.147813 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:40 crc kubenswrapper[4808]: E0217 17:31:40.148759 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.900811 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:31:42 crc kubenswrapper[4808]: E0217 17:31:42.901524 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-utilities" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901535 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-utilities" Feb 17 17:31:42 crc kubenswrapper[4808]: E0217 17:31:42.901558 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-content" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901563 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="extract-content" Feb 17 17:31:42 crc 
kubenswrapper[4808]: E0217 17:31:42.901592 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901599 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.901796 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="c342da3e-2aeb-4794-b93b-816f13e8dbf0" containerName="registry-server" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.903291 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:42 crc kubenswrapper[4808]: I0217 17:31:42.913348 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.051141 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.051306 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.051580 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnljm\" (UniqueName: 
\"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.147050 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54" Feb 17 17:31:43 crc kubenswrapper[4808]: E0217 17:31:43.147336 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.169798 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.171656 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.171864 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"redhat-operators-6nchh\" (UID: 
\"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.172506 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.180253 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.196712 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"redhat-operators-6nchh\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") " pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.229489 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh" Feb 17 17:31:43 crc kubenswrapper[4808]: I0217 17:31:43.740327 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"] Feb 17 17:31:44 crc kubenswrapper[4808]: I0217 17:31:44.435391 4808 generic.go:334] "Generic (PLEG): container finished" podID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27" exitCode=0 Feb 17 17:31:44 crc kubenswrapper[4808]: I0217 17:31:44.435685 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"} Feb 17 17:31:44 crc kubenswrapper[4808]: I0217 17:31:44.435711 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerStarted","Data":"503dec0c1b46beb199cdb9b9f8511fa3815bdff8b0c8a0335a7eeadf654cc2ed"} Feb 17 17:31:45 crc kubenswrapper[4808]: E0217 17:31:45.147014 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c" Feb 17 17:31:45 crc kubenswrapper[4808]: I0217 17:31:45.447415 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerStarted","Data":"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"} Feb 17 17:31:52 crc kubenswrapper[4808]: E0217 17:31:52.149289 4808 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:31:52 crc kubenswrapper[4808]: I0217 17:31:52.552531 4808 generic.go:334] "Generic (PLEG): container finished" podID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3" exitCode=0
Feb 17 17:31:52 crc kubenswrapper[4808]: I0217 17:31:52.552743 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"}
Feb 17 17:31:54 crc kubenswrapper[4808]: I0217 17:31:54.574651 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerStarted","Data":"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"}
Feb 17 17:31:54 crc kubenswrapper[4808]: I0217 17:31:54.604453 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nchh" podStartSLOduration=3.668280819 podStartE2EDuration="12.604431125s" podCreationTimestamp="2026-02-17 17:31:42 +0000 UTC" firstStartedPulling="2026-02-17 17:31:44.439802956 +0000 UTC m=+5867.956162029" lastFinishedPulling="2026-02-17 17:31:53.375953252 +0000 UTC m=+5876.892312335" observedRunningTime="2026-02-17 17:31:54.596190882 +0000 UTC m=+5878.112549985" watchObservedRunningTime="2026-02-17 17:31:54.604431125 +0000 UTC m=+5878.120790238"
Feb 17 17:31:56 crc kubenswrapper[4808]: E0217 17:31:56.151906 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:31:58 crc kubenswrapper[4808]: I0217 17:31:58.147432 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"
Feb 17 17:31:58 crc kubenswrapper[4808]: E0217 17:31:58.148165 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:32:03 crc kubenswrapper[4808]: E0217 17:32:03.148341 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.230755 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nchh"
Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.230960 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nchh"
Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.303590 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nchh"
Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.744024 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6nchh"
Feb 17 17:32:03 crc kubenswrapper[4808]: I0217 17:32:03.816121 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"]
Feb 17 17:32:05 crc kubenswrapper[4808]: I0217 17:32:05.724292 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nchh" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server" containerID="cri-o://6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" gracePeriod=2
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.262887 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.314861 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") pod \"dc288d34-4657-4146-9213-4b9ddfb8269e\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") "
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.315063 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") pod \"dc288d34-4657-4146-9213-4b9ddfb8269e\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") "
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.315161 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") pod \"dc288d34-4657-4146-9213-4b9ddfb8269e\" (UID: \"dc288d34-4657-4146-9213-4b9ddfb8269e\") "
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.316113 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities" (OuterVolumeSpecName: "utilities") pod "dc288d34-4657-4146-9213-4b9ddfb8269e" (UID: "dc288d34-4657-4146-9213-4b9ddfb8269e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.320859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm" (OuterVolumeSpecName: "kube-api-access-pnljm") pod "dc288d34-4657-4146-9213-4b9ddfb8269e" (UID: "dc288d34-4657-4146-9213-4b9ddfb8269e"). InnerVolumeSpecName "kube-api-access-pnljm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.417282 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.417316 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnljm\" (UniqueName: \"kubernetes.io/projected/dc288d34-4657-4146-9213-4b9ddfb8269e-kube-api-access-pnljm\") on node \"crc\" DevicePath \"\""
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.456859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc288d34-4657-4146-9213-4b9ddfb8269e" (UID: "dc288d34-4657-4146-9213-4b9ddfb8269e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.519084 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc288d34-4657-4146-9213-4b9ddfb8269e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739473 4808 generic.go:334] "Generic (PLEG): container finished" podID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788" exitCode=0
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"}
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739549 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nchh" event={"ID":"dc288d34-4657-4146-9213-4b9ddfb8269e","Type":"ContainerDied","Data":"503dec0c1b46beb199cdb9b9f8511fa3815bdff8b0c8a0335a7eeadf654cc2ed"}
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739659 4808 scope.go:117] "RemoveContainer" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.739805 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nchh"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.773242 4808 scope.go:117] "RemoveContainer" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.797062 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"]
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.805528 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nchh"]
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.822202 4808 scope.go:117] "RemoveContainer" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.880630 4808 scope.go:117] "RemoveContainer" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"
Feb 17 17:32:06 crc kubenswrapper[4808]: E0217 17:32:06.881367 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788\": container with ID starting with 6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788 not found: ID does not exist" containerID="6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.881428 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788"} err="failed to get container status \"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788\": rpc error: code = NotFound desc = could not find container \"6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788\": container with ID starting with 6d193ffbf1604de340eb5b6e0c29c3b3d546c32e7b55e401a8d84935e3046788 not found: ID does not exist"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.881471 4808 scope.go:117] "RemoveContainer" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"
Feb 17 17:32:06 crc kubenswrapper[4808]: E0217 17:32:06.883158 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3\": container with ID starting with adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3 not found: ID does not exist" containerID="adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.883232 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3"} err="failed to get container status \"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3\": rpc error: code = NotFound desc = could not find container \"adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3\": container with ID starting with adb5c9c079ff69ba0859e6efdd26503a6de0545d31e723b22d47848759e510a3 not found: ID does not exist"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.883275 4808 scope.go:117] "RemoveContainer" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"
Feb 17 17:32:06 crc kubenswrapper[4808]: E0217 17:32:06.884128 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27\": container with ID starting with 6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27 not found: ID does not exist" containerID="6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"
Feb 17 17:32:06 crc kubenswrapper[4808]: I0217 17:32:06.884228 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27"} err="failed to get container status \"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27\": rpc error: code = NotFound desc = could not find container \"6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27\": container with ID starting with 6a6ec8d852babba36bd4fc21db25531a1d10e4476d871b6cc0ea95c93802ba27 not found: ID does not exist"
Feb 17 17:32:07 crc kubenswrapper[4808]: E0217 17:32:07.160821 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:32:07 crc kubenswrapper[4808]: I0217 17:32:07.165444 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" path="/var/lib/kubelet/pods/dc288d34-4657-4146-9213-4b9ddfb8269e/volumes"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.370667 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"]
Feb 17 17:32:09 crc kubenswrapper[4808]: E0217 17:32:09.371651 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-content"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.371673 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-content"
Feb 17 17:32:09 crc kubenswrapper[4808]: E0217 17:32:09.371696 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.371711 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server"
Feb 17 17:32:09 crc kubenswrapper[4808]: E0217 17:32:09.371753 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-utilities"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.371768 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="extract-utilities"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.372146 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc288d34-4657-4146-9213-4b9ddfb8269e" containerName="registry-server"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.375201 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.384975 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"]
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.398111 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.398159 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.398301 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.499930 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.499975 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.500132 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.500704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.500736 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.525111 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"redhat-marketplace-qkmhg\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") " pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:09 crc kubenswrapper[4808]: I0217 17:32:09.706596 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.197337 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"]
Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.797927 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6" exitCode=0
Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.798239 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"}
Feb 17 17:32:10 crc kubenswrapper[4808]: I0217 17:32:10.798279 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerStarted","Data":"ea69f1e61af0c69960e28784a7e10b53b9c27388edf075d8aa066d5335b479b7"}
Feb 17 17:32:11 crc kubenswrapper[4808]: I0217 17:32:11.146658 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"
Feb 17 17:32:11 crc kubenswrapper[4808]: E0217 17:32:11.147017 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:32:11 crc kubenswrapper[4808]: I0217 17:32:11.810151 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerStarted","Data":"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"}
Feb 17 17:32:12 crc kubenswrapper[4808]: I0217 17:32:12.826128 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628" exitCode=0
Feb 17 17:32:12 crc kubenswrapper[4808]: I0217 17:32:12.826193 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"}
Feb 17 17:32:13 crc kubenswrapper[4808]: I0217 17:32:13.846244 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerStarted","Data":"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"}
Feb 17 17:32:13 crc kubenswrapper[4808]: I0217 17:32:13.896434 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qkmhg" podStartSLOduration=2.449945129 podStartE2EDuration="4.896407444s" podCreationTimestamp="2026-02-17 17:32:09 +0000 UTC" firstStartedPulling="2026-02-17 17:32:10.80081569 +0000 UTC m=+5894.317174783" lastFinishedPulling="2026-02-17 17:32:13.247277985 +0000 UTC m=+5896.763637098" observedRunningTime="2026-02-17 17:32:13.882759512 +0000 UTC m=+5897.399118685" watchObservedRunningTime="2026-02-17 17:32:13.896407444 +0000 UTC m=+5897.412766547"
Feb 17 17:32:18 crc kubenswrapper[4808]: E0217 17:32:18.149553 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.706704 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.708031 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.773242 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:19 crc kubenswrapper[4808]: I0217 17:32:19.960539 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:20 crc kubenswrapper[4808]: I0217 17:32:20.028352 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"]
Feb 17 17:32:21 crc kubenswrapper[4808]: I0217 17:32:21.937630 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkmhg" podUID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerName="registry-server" containerID="cri-o://1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" gracePeriod=2
Feb 17 17:32:22 crc kubenswrapper[4808]: E0217 17:32:22.152039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.516218 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.599924 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") pod \"2f28f98e-2752-4bf6-8867-d29f769d6d34\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") "
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.599985 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") pod \"2f28f98e-2752-4bf6-8867-d29f769d6d34\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") "
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.600126 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") pod \"2f28f98e-2752-4bf6-8867-d29f769d6d34\" (UID: \"2f28f98e-2752-4bf6-8867-d29f769d6d34\") "
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.600974 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities" (OuterVolumeSpecName: "utilities") pod "2f28f98e-2752-4bf6-8867-d29f769d6d34" (UID: "2f28f98e-2752-4bf6-8867-d29f769d6d34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.601456 4808 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-utilities\") on node \"crc\" DevicePath \"\""
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.606331 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv" (OuterVolumeSpecName: "kube-api-access-kq9qv") pod "2f28f98e-2752-4bf6-8867-d29f769d6d34" (UID: "2f28f98e-2752-4bf6-8867-d29f769d6d34"). InnerVolumeSpecName "kube-api-access-kq9qv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.635973 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f28f98e-2752-4bf6-8867-d29f769d6d34" (UID: "2f28f98e-2752-4bf6-8867-d29f769d6d34"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.704248 4808 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f28f98e-2752-4bf6-8867-d29f769d6d34-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.704287 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq9qv\" (UniqueName: \"kubernetes.io/projected/2f28f98e-2752-4bf6-8867-d29f769d6d34-kube-api-access-kq9qv\") on node \"crc\" DevicePath \"\""
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954023 4808 generic.go:334] "Generic (PLEG): container finished" podID="2f28f98e-2752-4bf6-8867-d29f769d6d34" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9" exitCode=0
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954089 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"}
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954132 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkmhg" event={"ID":"2f28f98e-2752-4bf6-8867-d29f769d6d34","Type":"ContainerDied","Data":"ea69f1e61af0c69960e28784a7e10b53b9c27388edf075d8aa066d5335b479b7"}
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954161 4808 scope.go:117] "RemoveContainer" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.954366 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkmhg"
Feb 17 17:32:22 crc kubenswrapper[4808]: I0217 17:32:22.987239 4808 scope.go:117] "RemoveContainer" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.020098 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"]
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.036859 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkmhg"]
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.057028 4808 scope.go:117] "RemoveContainer" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.086289 4808 scope.go:117] "RemoveContainer" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"
Feb 17 17:32:23 crc kubenswrapper[4808]: E0217 17:32:23.086935 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9\": container with ID starting with 1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9 not found: ID does not exist" containerID="1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087020 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9"} err="failed to get container status \"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9\": rpc error: code = NotFound desc = could not find container \"1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9\": container with ID starting with 1a6da2647f99bb4084bd5d2a1f4ae2713b2efc88a90054abaf8302e395ac5ef9 not found: ID does not exist"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087061 4808 scope.go:117] "RemoveContainer" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"
Feb 17 17:32:23 crc kubenswrapper[4808]: E0217 17:32:23.087601 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628\": container with ID starting with 94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628 not found: ID does not exist" containerID="94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087643 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628"} err="failed to get container status \"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628\": rpc error: code = NotFound desc = could not find container \"94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628\": container with ID starting with 94dfb34901d9dd0dff5abfc80586e6a30900b46ae8fc9049d4949f08304db628 not found: ID does not exist"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.087671 4808 scope.go:117] "RemoveContainer" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"
Feb 17 17:32:23 crc kubenswrapper[4808]: E0217 17:32:23.088236 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6\": container with ID starting with 20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6 not found: ID does not exist" containerID="20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.088331 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6"} err="failed to get container status \"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6\": rpc error: code = NotFound desc = could not find container \"20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6\": container with ID starting with 20f9253d2c18217469a3b4d06a05e7594eabfa2e4a73524d65b1b7e0e12483f6 not found: ID does not exist"
Feb 17 17:32:23 crc kubenswrapper[4808]: I0217 17:32:23.162300 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f28f98e-2752-4bf6-8867-d29f769d6d34" path="/var/lib/kubelet/pods/2f28f98e-2752-4bf6-8867-d29f769d6d34/volumes"
Feb 17 17:32:24 crc kubenswrapper[4808]: I0217 17:32:24.146518 4808 scope.go:117] "RemoveContainer" containerID="21cd60b81b7f48724a7b1dc2d7a6a9c6b537ff0cbb1155a7193b7f0c090faf54"
Feb 17 17:32:24 crc kubenswrapper[4808]: E0217 17:32:24.147121 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k8v8k_openshift-machine-config-operator(ca38b6e7-b21c-453d-8b6c-a163dac84b35)\"" pod="openshift-machine-config-operator/machine-config-daemon-k8v8k" podUID="ca38b6e7-b21c-453d-8b6c-a163dac84b35"
Feb 17 17:32:30 crc kubenswrapper[4808]: E0217 17:32:30.150075 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="2876084b-7055-449d-9ddb-447d3a515d80"
Feb 17 17:32:34 crc kubenswrapper[4808]: E0217 17:32:34.149208 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-zl7nk" podUID="a4b182d0-48fc-4487-b7ad-18f7803a4d4c"